Material Jetting

Material jetting (MJ) also uses photopolymers but, instead of a vat, jets droplets of material layer by layer from a nozzle onto the build surface, much like a 2D inkjet printer. The liquid is first heated to roughly 30–60°C to reach an optimum viscosity for jetting. The print head travels over the build surface, depositing droplets of liquid, which are then cured by a UV light to solidify and harden the photopolymer. This process is repeated layer by layer. Depositing droplets in this way results in less waste than either SLA or DLP. MJ requires support structures for any overhangs but, unlike SLA, usually doesn't require post-curing because of its smaller layer heights.

Benefits and Limitations

Perhaps the greatest attraction of MJ is that multiple nozzles can be set up to jet different plastics (or different colourings of plastic), giving different properties or colours to different sections of a print. Different colourings can also be mixed to give specific hues. In addition, it's common for support structures to be made from a secondary dissolvable material that can be removed by either pressurised water or an ultrasonic bath. Dissolvable supports like this can leave no mark once removed, maintaining a high quality surface finish. Other benefits include lower waste than SLA or DLP due to using jetted droplets rather than a vat (as mentioned earlier). Print quality is also very high, meaning MJ gives very smooth surface finishes and dimensionally accurate prints.

MJ shares some of the same disadvantages as SLA and DLP: the brittle nature of printed objects makes them unsuitable for many applications, the polymers used are photosensitive and break down over time and, finally, printing is expensive.

Powder bed fusion

Powder bed fusion (PBF) refers to a number of methods in which, common to each process, powdered material is heated in a chamber and fused one layer at a time. Once a layer is formed, the build platform is lowered by an amount equal to one layer and new powder is spread over previous layers by either a blade or a roller. For polymer AM, there are two common methods of powder bed fusion: Selective Laser Sintering (SLS) and Multi Jet Fusion (MJF).

Selective Laser Sintering (SLS)

SLS is analogous to SLA, using a moveable laser to selectively sinter polymer powder in layers. Initially, a thin layer of powder covers a platform inside a chamber. The chamber is held just under the melting point of the polymer so that when the laser is applied, the powder begins to melt, sintering and fusing together.

Multi Jet Fusion (MJF)

MJF, instead of using a laser, uses nozzles to drop a "binding agent" onto the surface of the powder bed. Just like MJ, this is done in a similar way to how 2D printers jet ink. Additional agents can be added to help define boundaries or give specific colour to individual voxels (a voxel being a 3D pixel); currently, however, the choice of colour is limited. The binding agents define whether a voxel is part of the structure or will remain as powder. The agents have high absorption of IR radiation, so after the agents are jetted, an IR light passes over the powder bed to locally heat the powder in locations containing the binding agent, causing the powder to melt and fuse. Post-processing is minimal, unlike other AM methods which require the removal of support structures, and mainly consists of cleaning off excess powder.
Benefits and Limitations

The main benefit of PBF is that there is no need for support structures, as the surrounding powder supports the forming object, giving manufacturers more design options. A second, related benefit is that little waste is produced: no material is spent on support structures, and unfused powder is reusable. MJF specifically can more easily and quickly produce a larger number of objects at once by utilising the entire print volume. While it can't match injection moulding at high volumes, MJF is cost-equivalent at low production volumes.

PBF processes have a number of downsides. Both methods result in rough surface finishes, with roughness depending on powder particle size (though, as an upside, there are almost no visible layer lines). This is because powder particles at the edge of a voxel being heated by the laser or IR radiation have a reasonable chance of partially sintering, binding to the surface of the desired shape. The choice of materials available to SLS and MJF is mostly limited to various nylons, in turn limiting properties. Printed objects also tend to be fairly weak, reducing the potential uses of any printed object. All the PBF methods are also very energy intensive. This is due to having to keep the powder in a heated chamber (so the polymer melts readily when more heat is applied), which must be reheated for each print. This can lead to further problems, such as affecting unused powder in the chamber and rendering it unusable. Also, as prints experience heating and cooling, it is possible for them to warp. Finally, as with the vat photopolymerisation methods, hollow shapes cannot be formed, as powder would be unable to drain away from an enclosed shape.

Material Extrusion

Material extrusion AM methods operate under the principle of using a nozzle to extrude and eject hot plastic onto a build surface. There are two slightly varying methods of material extrusion: Fused Deposition Modelling (FDM) and Arburg Plastic Freeforming (APF) (translated from the German name, Arburg Kunststoff Freiformen).

Fused Deposition Modelling (FDM)

FDM is what most people imagine as 3D printing, being the most common type of additive manufacturing, mainly because its low cost makes it accessible to industry and hobbyists alike. It also has a relatively large selection of materials available, as numerous thermoplastics can be printed using FDM; the most common are ABS and PLA. FDM uses a spool of plastic that feeds a thread of plastic (known as a filament) through a nozzle. The nozzle is on a print head that mechanically forces the filament through a cold section, then heats and melts the plastic in a hot section, before extruding the plastic through the nozzle. Now melted and extruded, the plastic is printed directly onto the surface of a build in a continuous stream. The print nozzle moves in the x-y plane to control where the plastic is placed. Once the layer is completed, the platform the object sits on moves down in the z direction by a small amount for the next layer to be built. As the deposited plastic is hot, it is able to fuse to the layers below, softening and binding to the surface of the previous layer. This fuse is weaker than the extruded thread of material, leading to anisotropic properties: printed objects are generally weaker in the z-direction. For more on anisotropy in FDM prints, see the Properties of FDM Prints section; a schematic of the build loop is sketched below.
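The layer-by-layer build loop just described can be summarised in a short, runnable sketch. All the names here are illustrative placeholders rather than any real printer's API, and the layer height is simply a typical assumed value.

```python
from dataclasses import dataclass

@dataclass
class Path:
    points: list  # sequence of (x, y) coordinates in mm

def print_object(layers, layer_height_mm=0.2):
    """Schematic FDM build loop: trace each x-y toolpath, then step down in z."""
    z = layer_height_mm
    for layer in layers:
        for path in layer:
            # A real print head would extrude molten filament along this path.
            print(f"extruding along {len(path.points)} points at z = {z:.2f} mm")
        z += layer_height_mm  # the platform lowers by one layer height

# Toy example: a 10 mm square outline printed as two layers.
square = Path([(0, 0), (10, 0), (10, 10), (0, 10), (0, 0)])
print_object([[square], [square]])
```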
Arburg Plastic Freeforming (APF)

APF works similarly to FDM, with a few notable differences. The first is that the plastic is supplied as pellets. The pellets are melted and forced along to a nozzle by a rotating screw, similar to injection moulding. Once the plastic is in the nozzle, the nozzle periodically opens and closes, letting out individual droplets of melted plastic onto the build surface. The platform being printed on can move in the x, y and z directions to control where the droplets are placed. Droplet size can be controlled by changing the nozzle diameter.

APF and FDM usually print onto a hot surface (the exact temperature may depend on the plastic being used) to reduce warping (plastic can warp due to shrinkage on rapid cooling) and give better adhesion between the print and the print bed. Both processes need support structures, so post-processing mostly involves the removal of these structures. Some more advanced printers are able to print multiple plastics at once, so support structures can be printed in a material, such as PVA, that can be easily removed (usually using water). Further post-processing can involve smoothing the surface of the object, as FDM in particular can give a poor surface finish, although surface finish can vary quite a lot between printer models.

Benefits and Limitations

As mentioned earlier, material extrusion (specifically FDM) printing is cheap and widely accessible. The common plastics used are also cheap, meaning FDM is ideal for amateurs. Material extrusion also has many plastics available to it, including polymers infused with other materials. This enables more choice over properties, with each polymer having its own pros and cons when used to print.

On the other hand, material extrusion can suffer from lower resolution and worse quality prints than photopolymerisation or PBF processes. Surface finishes are bumpy, usually with easily visible layer lines, which can mean more time spent post-processing to achieve a smooth surface. Binding between layers can be poor, especially for some polymers; to see what this means for the print, read the Properties of FDM Prints section. Print failure can also be quite common. This is a problem for all AM methods, but FDM is susceptible to quite a few potential faults in printing, e.g. fast heating then cooling of plastic can lead to thermal contraction as the object is printed, resulting in the print warping.

Properties of FDM Prints

Additive manufacturing methods currently struggle to recreate the mechanical properties of parts made by traditional production methods, such as injection moulding. This section explores the material properties of FDM prints and what may be causing these properties.

Bonding

Fundamental to understanding the properties of FDM prints is understanding how the printed lines of plastic (an individual line also being known as a raster) bond together. Bonding occurs when hot plastic ejected from the nozzle is applied to the print surface, partially melting neighbouring plastic and entangling polymer chains. The strength of the bond can be modelled as a temperature-dependent diffusion process, where a greater temperature for a longer time results in greater diffusion and, subsequently, greater entanglement. Entanglement like this can be referred to as the plastic adhering to itself. This method of bonding has a couple of problems.
Firstly, the extent of entanglement of chains will be far less **between** rasters than **within** rasters. Less entanglement means less force is required to break apart the chains, so the bonding is weaker. The second problem is that gaps, known as voids, can form and reduce the overall cross-sectional area that contributes to the strength of a build. Voids develop from the geometry of the rasters and layers which, when stacked in a print, leave holes from inefficient packing. In addition, printing errors can mean rasters or layers don't fully adhere, also creating voids between lines. The colder a neighbouring line of plastic is relative to the one being printed, the weaker the bonding between the lines is likely to be, with a greater chance of void formation.

![](images/voids.svg)

Diagram showing a section of printed FDM lines with voids arising from lack of full adhesion and from line geometry

The result is that strength parallel to a set of printed lines is high, while strength perpendicular to them is low (i.e. the material is anisotropic), with values depending on polymer strength and the extent of adhesion between rasters and layers.

### Infill Pattern

To combat anisotropy in the x-y plane, FDM printers print specific structures that have the same strength in both x and y, increasing the strength of one direction while decreasing it in the other. A common structure is a crosshatch pattern with roughly half the material pointing in the x direction and half in the y, so each direction gets equal amounts of the strength from rasters parallel to that direction and the weak bonding from rasters perpendicular to it.

![](images/alternating.svg)

Diagram showing rasters printed in alternating directions parallel to x and y. No lines are drawn vertically in the z direction

Below is an SEM picture of an FDM printed sample, showing voids that have formed between rasters. Also seen through the void is the layer below, which points perpendicular to the viewed top layer.

![SEM of an FDM print](images/semvoid.jpg)

SEM picture of an FDM print. A void can be seen in the top layer. Through the void, rasters of the layer below can be seen pointing perpendicular to the top layer.

It is worth noting that most printers aren't made to produce objects that are 100% filled with plastic. Instead, a printer will print the outside of a layer as a solid line, and then fill in the gaps in the centre with a predetermined "infill pattern" so that a specific percentage of the area inside is filled. This saves plastic, so is more economical, but the structure loses some of its strength. A reduction in fill percentage means a reduction in the cross-sectional area of plastic providing strength under tension. Therefore, as a general rule, lower fill percentage means lower strength (assuming other properties of the print are the same). The magnitude of this decrease will depend on the fill pattern used. The print will also be lighter, which may be important depending on the application of the object. An example set of fill patterns, known as grid patterns, is shown below at different fill percentages, followed by an example vertical cross section showing where the infill would go.

![](images/infill.svg)

Diagram showing the shape of infill patterns in the x-y plane

![](images/sample.svg)

Diagram showing a vertical cross section of a sample

The infill pattern can be described by its "raster angle": the angle between the path of the nozzle and the x-axis of the printing platform during FDM.
For the 30% and 50% infills shown above, the raster angle alternates between ±45° within a layer, then between 45° and 135° in the next. The difference between the patterns is how closely spaced the printed lines are. This isn't the only way to produce a pattern like this, however: the two patterns could instead be printed by keeping the raster angle at 45° for one layer, then at -45° for the next, but this may affect properties. The 90% fill differs by having the raster angle alternate between 0° and 90° layer by layer.

The exact effect fill pattern has on strength can be quite complex, with many different factors coming into play. For instance, different patterns can result in different adhesion strengths. As mentioned previously, a higher temperature of neighbouring filament increases the extent of adhesion by improving the ability of the plastic to entangle. Therefore, a fill pattern whose lines can adhere soon after being printed will be hotter when bonding and, therefore, form stronger bonds. An example of such a pattern is the Hilbert curve, which has very short times before printed lines are able to bond to each other.

![Hilbert curve](images/Hilbert-Kurve.png)

Diagram of Hilbert curve pattern

Print angle

Of course, the pattern will also affect how much material is aligned at particular angles with respect to a loading direction, which can in turn affect strength. It has been observed that rasters at 0° to the loading direction are strongest, weakening with increasing angle up to 90°. This can be explained by considering how shear and normal stresses change with angle, and the resulting forces acting parallel and perpendicular to the rasters. At 0°, rasters are loaded parallel to their printed direction, where they are strong (failure requires the rasters themselves to break, which requires a high stress to overcome the high level of entanglement). As the angle increases, a greater proportion of the stress is applied perpendicular to the raster, where failure can occur by rasters breaking away from each other (often known as delamination). As the tensile strength between rasters is significantly lower, failure occurs at lower stresses. At 90°, the force acts only perpendicular to the rasters, so strength is at a minimum.

![a](images/angles.svg)

Analysis like this becomes more complex for patterns printed in multiple directions, such as the crosshatch and Hilbert curve, which have rasters at both 0° and 90°. Rotating the print pattern relative to the loading direction now increases the raster angle for some rasters while decreasing it for the rest. On rotating by 0° or 90°, the pattern has the same strength (the raster angles in both cases are 0°/90°, so strength is symmetrical about a rotation of 45°, where the raster angles are -45°/45°). However, the angle at which the rasters are strongest can vary. One argument is that at 45° (raster angles -45°/45°), all the rasters are under both normal and shear stresses, so all are at risk of delamination and strength is low; while at 0° (raster angles 0°/90°), half the rasters delaminate very easily as they lie perpendicular to the loading direction, but the other half are oriented parallel to it, producing a strong sample. However, it could equally be argued for 0° that the perpendicular rasters fail before the parallel ones, increasing the stress on the parallel rasters through the reduction in cross-sectional area and leading them to fail as well, meaning a print with raster angles at 0° and 90° could be the weakest orientation.
Sometimes, it can even be seen that strength doesn't vary much with angle. Ultimately, which direction is strongest will depend on many variables, including adhesion strength, type of infill pattern and infill percentage (a simple numerical illustration is sketched at the end of this section).

![a](images/double.svg)

While these patterns help prevent anisotropy in the x-y plane, they do not combat anisotropy in the z direction: in all cases for FDM, no lines are printed vertically, so strength in z depends entirely on adhesion between layers, making the z direction significantly weaker than the other two.

![](images/fail.svg)

Diagram showing two samples fracturing, one printed so rasters are parallel to the loading direction (printed flat and loaded in the x/y direction), and one printed so rasters face perpendicular to the loading direction (printed vertically then loaded in the z direction).

Choice of material

Choice of material can also have large effects on properties, and may even pose different challenges on printing. PLA is the most commonly used plastic for FDM printers, mainly due to its ease of printing. As its glass transition (*T*g) and melting (*T*m) temperatures are relatively low (~60°C and ~160°C respectively), the nozzle temperature can be held quite low when printing (~215°C), reducing the risk of warping. PLA, made from plant starch, is also biodegradable, making it an environmentally friendly choice of material. However, PLA is quite brittle, and its low *T*g makes it unsuitable for high temperature applications.

The second most common plastic used is ABS. ABS has a *T*g of ~105°C, making it more suitable for higher temperature applications. However, this comes at a cost: ABS must be printed at a higher temperature than PLA, so it cools more rapidly when printing and is therefore more likely to warp or crack during a print. Overall, ABS is more ductile, durable, and usable at higher temperatures than PLA, which can make it a desirable choice of plastic in some cases. However, despite having a tensile strength similar to PLA when processed by other methods, ABS doesn't adhere to itself as well as PLA, so the tensile strength of ABS prints is usually lower than that of equivalent PLA prints.

Simulation

Below is a simulation that shows a stress-strain curve for an FDM printed plastic object under tension, where you can choose certain variables of the print. The values shouldn't be taken as exact, as many variables that may affect properties are not considered here. Variables such as print speed, print temperature, and even the exact batch of plastic can all affect the final properties of a print. The simulation also shows prints at 45° being weaker, but this may not be the case for all printers, plastics or infill patterns.

*(When rotating print direction, the infill pattern is still printed at the same raster angle relative to the print bed, meaning the angle changes relative to the print body, effectively giving a new raster angle when loading the sample, e.g. a print with original raster angles ±45° rotated by 45° now has **effective** raster angles of 0°/90°.)*

For comparison, general purpose PLA is quoted as having a tensile strength between 47–70 MPa, while injection moulded ABS has a tensile strength of 42–46 MPa *(CES EduPack)*.
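Returning to the raster-angle discussion above, a rough numerical illustration can be made by treating a printed layer like a unidirectional composite and applying the Tsai-Hill failure criterion. Both the criterion and all the strength values below are assumptions chosen for illustration, not measured FDM data or the model behind the simulation.

```python
import math

# Tsai-Hill sketch of strength vs raster angle for an FDM layer.
# X, Y, S are assumed illustrative values (MPa), not measured data:
X = 50.0   # strength parallel to the rasters (rasters themselves must break)
Y = 20.0   # strength perpendicular to the rasters (delamination)
S = 15.0   # shear strength of the inter-raster bond

def tsai_hill_strength(theta_deg, fill_fraction=1.0):
    """Predicted tensile strength (MPa) for loading at theta_deg to the rasters."""
    t = math.radians(theta_deg)
    c2, s2 = math.cos(t) ** 2, math.sin(t) ** 2
    inv_sq = c2**2 / X**2 + s2 * c2 * (1 / S**2 - 1 / X**2) + s2**2 / Y**2
    # Lower infill leaves less load-bearing cross-section, so scale down:
    return fill_fraction / math.sqrt(inv_sq)

for angle in (0, 15, 30, 45, 60, 75, 90):
    print(f"{angle:2d} deg: {tsai_hill_strength(angle):5.1f} MPa")
```

With these numbers the predicted strength falls monotonically from 50 MPa at 0° to 20 MPa at 90°, matching the qualitative trend described above, and halving the fill fraction halves every value.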
Additive manufacturing other materials

Additive manufacturing extends past polymers, also being available to metals and other materials using processes that are fairly analogous to those used for polymers. A list of these processes, with brief descriptions, is given below.

Metal

* Fused Deposition Modelling, **FDM**: directly analogous to FDM for polymers, except using molten metal instead.
* Selective Laser Melting, **SLM**: a type of powder bed fusion. Uses a laser to melt and bind metal in a powder bed. Analogous to SLS.
* Electron Beam Melting, **EBM**: a type of powder bed fusion. Like SLS, but uses an electron beam to melt the powder instead and therefore must be done in a vacuum.
* Laser Engineering Net Shape, **LENS**: a type of direct energy deposition, DED. This can be seen as a mix between FDM and soldering. Metal wire is supplied to a build, where it is then heated and melted by a laser, locally depositing metal onto the build surface.
* Electron Beam Additive Manufacturing, **EBAM**: a type of DED. Similar to LENS, but an electron beam is used to heat the metal instead and therefore the process must be done in a vacuum.
* Binder Jetting, **BJ**: a binding agent is added to metal powder, sticking the powder together. The bound powder is then sintered once out of the powder bed. This is fairly similar to MJF, without the use of IR radiation to immediately sinter the object.
* Nano Particle Jetting, **NPJ**: metal particles suspended in a solvent are applied to a print surface by nozzles. The object is heated immediately, evaporating the solvent and leaving the metal particles; the object is then sintered later. The printing process is similar to MJ, but requires extra sintering and doesn't use UV.

Other Materials

* Fused Deposition Modelling, **FDM**: the same process as for metals and polymers. Material filament is melted and extruded through a nozzle.
* Paste Extrusion Modelling, **PEM**: very similar to FDM, but used for materials that are a paste at room temperature, such as cement paste. The printing works in the same way, except there is no heating element; instead of filament, a paste supply is simply extruded through a nozzle onto the print surface.
* Binder Jetting, **BJ**: similar to metal BJ and polymer MJF. Binding agent droplets are applied layer by layer to a material powder bed, causing the powder to stick. Generally used for sand or gypsum.
* Drop on Demand, **DOD**: a type of material jetting (MJ), similar to the MJ process for polymers. Hot material is jetted dropwise through nozzles, layer by layer, onto the print surface, then solidifies on cooling. An example material used for this is wax.
* Laminated Object Manufacturing, **LOM**: nozzles apply an adhesive to the top of a build surface. A new layer of material is then *laminated* onto the previous layer, bound by the adhesive. The layer is then cut to the correct cross section by a knife, laser or wire. The process is repeated to produce an object.

The table below summarises which processes are similar to those used for polymers.

| Polymer | Metal | Ceramic/Other |
| - | - | - |
| MJF | BJ | BJ |
| SLS | SLM / EBM | |
| FDM | FDM | FDM / PEM |
| APF | | |
| MJ | NPJ | DOD |
| SLA | | |
| DLP | | |
| | LENS / EBAM | |
| | | LOM |

And the following table organises each method into broader AM categories.
| Powder Bed Fusion | Direct Energy Deposition | Material Extrusion | Binder Jetting | Material Jetting | Photopolymerisation | Sheet Lamination |
| - | - | - | - | - | - | - |
| MJF | LENS | All FDM types | Metal BJ | Polymer MJ | SLA | LOM |
| SLS | EBAM | APF | Sand or gypsum BJ | NPJ | DLP | |
| SLM | | | | DOD | | |
| EBM | | | | | | |

Looking to the future, the number of materials available to additive manufacturing will continue to increase. One such material is organic matter (printing organic matter is sometimes known as bioprinting), with ambitions of printing tissue to test drugs, and eventually entire organs, potentially for transplants.

Summary

Prints have separate resolutions in the x-y plane (determined by the minimum movement of the printing nozzle/laser) and the z direction (determined by layer height). This is separate again from the minimum feature size, which is determined by the diameter of the nozzle/laser spot.

To help distinguish the various methods of polymer additive manufacturing, the benefits, limitations and usage of each are summarised in the table below.

| **Type of AM** | **Benefits** | **Downsides** | **Best for** |
| - | - | - | - |
| **Material Extrusion (geared towards FDM)** | * Cheap * Many materials available | * Bad quality print * Significant anisotropic effects * Good chance of warping from heat | * Amateur use * Rapid prototyping |
| **Photopolymerisation** | * High quality prints * Isotropic | * Very limited materials available * Lots of post-processing * No hollow sections * Photosensitive | * Functional prototyping * Moulds |
| **Material Jetting** | * Multiple materials possible at once * Good quality prints * Very smooth surface finish | * Very expensive * Limited materials * Photosensitive | * Rapid prototyping * Coloured, custom models * Moulds |
| **Powder Bed Fusion** | * No support structures * Mass production easier with MJF * Relatively good quality prints * Limited post-processing | * Limited materials available * Rough surface finish * No hollow sections * Chance of warping from heat | * Functional prototyping * Custom end-use models |

It is important to note that all additive manufacturing methods struggle to control properties with the precision and variety that traditional methods offer. The limited material pool and printing effects such as the anisotropy of FDM prints prevent additive manufacturing from being seen as a better alternative to existing methods. Combined with the fact that additive manufacturing is generally quite slow and unable to produce products in bulk, AM is mostly used in pre-production for prototyping, to produce parts that are used to make moulds (which then allow fast production), and to print one-off custom objects such as hearing aids (which require specific shapes to fit in the user's ear).

Looking specifically at FDM, properties are highly anisotropic and can vary greatly depending on variables such as infill percentage, fill pattern and print direction. How a print is created should be considered when making a product, with its purpose in mind, so that these variables can be chosen appropriately.

Questions

### Select the correct order of the AM manufacturing process.

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP.
If not, then you should go through it again!*

1. Which of the following processes uses material extrusion to print?

| | |
| - | - |
| a | Arburg Plastic Freeforming, APF |
| b | Selective Laser Sintering, SLS |
| c | Material Jetting, MJ |
| d | Digital Light Processing, DLP |

### Deeper questions

*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

2. How does stereolithography bond layers together?

| | |
| - | - |
| a | Powder heats up, melting and fusing with the layer below |
| b | Newly deposited material is hot, partially melting the surface of the previous layer and entangling polymer chains, fusing them |
| c | The previous layer is left in a green state so that the surface is left partially reacted. When the new layer is cured, the previous layer also reacts and binds |
| d | New layers are bound by adhesive, which is applied between each layer as it's printed |

3. Which of the following prints of a tensile test sample would you expect to have the highest ultimate tensile strength?

| | |
| - | - |
| a | PLA, 90% fill, loaded in the z direction |
| b | PLA, hollow, printed at 45° to the x direction and loaded in that direction |
| c | ABS, 30% fill, loaded in the x direction |
| d | PLA, 30% fill, printed on its side and loaded in the x direction |

4. You wish to print a hollow section for a prototype. Which method of printing should you use?

| | |
| - | - |
| a | Stereolithography, SLA |
| b | Electron Beam Melting, EBM |
| c | Multi Jet Fusion, MJF |
| d | Fused Deposition Modelling, FDM |

5. Which factor usually limits resolution in FDM?

| | |
| - | - |
| a | Layer height |
| b | Minimum movement of the nozzle in the x-y plane |
| c | Nozzle diameter |
| d | Print temperature |

6. What is a key limitation that applies to all additive manufacturing processes?

| | |
| - | - |
| a | Expensive |
| b | Lack of control over material properties |
| c | Printing has visible layer lines |
| d | Warping on printing |

Going further

**Books**

*Additive Manufacturing Technologies: 3D Printing, Rapid Prototyping, and Direct Digital Manufacturing*, I. Gibson

**Other**

I recommend watching videos of each process to fully grasp the manufacturing, and also viewing images of objects made by each process to see how final print quality differs.
Aims

On completion of this TLP you should:

* Understand the basic principles of atomic force microscopy (AFM), including the different modes it can be used in.
* Understand how AFM can be used in materials science.
* Be aware of some of the problems that can be encountered, and how to overcome them.

Before you start

You should have a basic understanding of the behaviour of piezoelectric materials, to understand how the piezo-scanner works in AFM.

Introduction

Atomic force microscopy (AFM) is part of the family of techniques known as scanning probe microscopy, and has proved itself extremely valuable and versatile as an investigative tool. The AFM invented by Gerd Binnig and others in the mid 1980s differed in many ways from today's instruments, but its basic principles remain the same. Binnig had already received the Nobel Prize in Physics for his creation of the scanning tunnelling microscope (STM), and the first AFMs in fact relied on an integrated STM tip. But the AFM had a major advantage over STM: it could be used for insulating as well as conducting samples. Over the years, AFM has had a significant impact in many disciplines, from surface science to biological and medical research. Because of its ability to image samples on an atomic scale, it has been vital to the advance of nanotechnology.

![Image of surface of a thin film of GaN](images/GaN.jpg)

This AFM image shows the surface of a thin film of GaN. The surface morphology is dominated by terraces and steps. The step heights are approximately 0.25 nm, corresponding to one layer of gallium and nitrogen atoms. This illustrates the ability of AFM to measure very small height changes on surfaces.

![A topographic AFM image of a collagen fibril](images/collagen.jpg)

The figure above is a topographic AFM image of a collagen fibril. The fibril is the striped structure running diagonally across the middle of the image. The periodicity of the narrow stripes or bands seen in the image is 64 nm. AFM can be used to image biological samples such as collagen without requiring a conductive coating to be added. It is even possible to take images of live cells in a fluid environment.

In simple terms, the atomic force microscope works by scanning a sharp probe over the surface of a sample in a raster pattern. By monitoring the movement of the probe, a 3-D image of the surface can be constructed. Below is a schematic diagram of an AFM.

Tip Surface Interaction

When the tip is brought close to the sample, a number of forces may operate. Typically the forces contributing most to the movement of an AFM cantilever are the *coulombic* and *van der Waals* interactions.

* **Coulombic interaction:** This strong, short-range repulsive force arises from electrostatic repulsion by the electron clouds of the tip and sample. The repulsion increases as the separation decreases.
* **Van der Waals interactions:** These are longer-range attractive forces, which may be felt at separations of up to 10 nm or more. They arise due to temporary fluctuating dipoles.

The combination of these interactions results in a force-distance curve similar to that below:

![Graph of force against distance](images/force graph.png)

Plot of force against distance

As the tip is brought towards the sample, van der Waals forces cause attraction, and this attraction increases as the tip gets closer. At small separations, however, the repulsive coulombic forces become dominant. The repulsive force causes the cantilever to bend as the tip is brought closer to the surface.
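A force-distance curve of this shape is often idealised with a Lennard-Jones-type potential: an attractive van der Waals term at long range and a steep repulsive term at short range. The sketch below evaluates the resulting force; using this particular potential, and the parameter values, are assumptions for illustration rather than anything specific to real AFM tips.

```python
import numpy as np

# Force from a Lennard-Jones-type potential U(z) = 4*eps*((s/z)**12 - (s/z)**6),
# so F = -dU/dz = (24*eps/z) * (2*(s/z)**12 - (s/z)**6).
# eps and s (sigma) are arbitrary illustrative values in reduced units.
eps, s = 1.0, 1.0

for z in np.linspace(0.95, 3.0, 6):
    force = (24 * eps / z) * (2 * (s / z) ** 12 - (s / z) ** 6)
    regime = "repulsive" if force > 0 else "attractive"  # sign convention: +ve pushes tip away
    print(f"z = {z:4.2f}  F = {force:+8.3f}  ({regime})")
```

At large separations the force is weakly attractive; below roughly z = s it turns steeply repulsive, reproducing the shape of the curve above.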
There are other interactions besides coulombic and van der Waals forces which can have an effect. When AFM is performed in ambient air, the sample and tip may be coated with a thin layer of fluid (mainly water). When the tip comes close to the surface, *capillary forces* can arise between the tip and surface. These effects are summarised in the animation below. It is also possible to detect other forces using the AFM, such as magnetic forces to map the magnetic domains of a sample.

Modes of Operation

AFM has three differing modes of operation: contact mode, tapping mode and non-contact mode.

Contact mode

In contact mode the tip contacts the surface through the adsorbed fluid layer on the sample surface. The detector monitors the changing cantilever deflection, and the force is calculated using Hooke's law:

F = −*k*x     (*F* = force, *k* = spring constant, *x* = cantilever deflection)

The feedback circuit adjusts the probe height to try to maintain a constant force and deflection on the cantilever. This is known as the *deflection setpoint*.

Tapping mode

In tapping mode the cantilever oscillates at or slightly *below* its resonant frequency. The amplitude of oscillation typically ranges from 20 nm to 100 nm. The tip lightly "taps" on the sample surface during scanning, contacting the surface at the bottom of its swing. Because the forces on the tip change as the tip-surface separation changes, the resonant frequency of the cantilever depends on this separation:

\[\omega = \omega_0 \sqrt{ 1 - \frac{1}{k} \frac{\mathrm{d}F}{\mathrm{d}z} }\]

The oscillation is also damped when the tip is closer to the surface. Hence changes in the oscillation amplitude can be used to measure the distance between the tip and the surface. The feedback circuit adjusts the probe height to try to maintain a constant amplitude of oscillation, i.e. the *amplitude setpoint*.

Non-contact mode

In non-contact mode the cantilever oscillates near the surface of the sample, but does not contact it. The oscillation is at slightly *above* the resonant frequency. Van der Waals and other long-range forces decrease the resonant frequency just above the surface, and this decrease in resonant frequency causes the amplitude of oscillation to decrease. In ambient conditions the adsorbed fluid layer is often significantly thicker than the region where van der Waals forces are significant. So the probe is either out of range of the van der Waals forces it attempts to measure, or becomes trapped in the fluid layer. Therefore non-contact mode AFM works best under ultra-high vacuum conditions.
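To put rough numbers on the two relations above, the sketch below evaluates the contact-mode force from Hooke's law and the tapping-mode resonance shift from the quoted formula. All the parameter values (spring constants, frequency, force gradient) are illustrative assumptions, not values from any particular instrument.

```python
import math

# Contact mode: force from cantilever deflection via Hooke's law, |F| = k*x.
k = 0.2          # spring constant, N/m (illustrative soft contact-mode lever)
x = 10e-9        # measured deflection, m
print(f"contact force ~ {k * x * 1e9:.1f} nN")          # 2.0 nN

# Tapping mode: resonance shift with force gradient dF/dz,
# omega = omega0 * sqrt(1 - (1/k_t) * dF/dz), as quoted above.
k_t = 40.0       # stiffer tapping-mode cantilever, N/m (illustrative)
f0 = 300e3       # free resonant frequency, Hz (illustrative)
dF_dz = 0.5      # assumed force gradient near the surface, N/m
f = f0 * math.sqrt(1 - dF_dz / k_t)
print(f"resonance shifts from {f0/1e3:.1f} kHz to {f/1e3:.1f} kHz")
```

Even this small assumed force gradient shifts the resonance by a couple of kilohertz, which is easily detectable as a change in oscillation amplitude.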
Comparison of modes

| | **Advantage** | **Disadvantage** |
| - | - | - |
| Contact Mode | * High scan speeds * Rough samples with extreme changes in vertical topography can sometimes be scanned more easily | * Lateral (shear) forces may distort features in the image * In ambient conditions there may be strong capillary forces due to the adsorbed fluid layer * The combination of lateral and strong normal forces reduces resolution and means that the tip may damage the sample, or vice versa |
| Tapping Mode | * Lateral forces almost eliminated * Higher lateral resolution on most samples * Lower forces, so less damage to soft samples or tips | * Slower scan speed than in contact mode |
| Non-contact Mode | * Both normal and lateral forces are minimised, so good for measurement of very soft samples * Can achieve atomic resolution in a UHV environment | * In ambient conditions the adsorbed fluid layer may be too thick for effective measurements * Slower scan speed than tapping and contact modes, to avoid contacting the adsorbed fluid layer |

The Scanner

The scanner moves the probe over the sample (or the sample under the probe) and must be able to control the position extremely accurately. In most AFMs, piezoelectric scanners are used to achieve this. These change dimensions with an applied voltage. The diagram below shows a typical scanner arrangement, with a hollow tube of piezoelectric material and the controlling electrodes attached to the surface.

![Diagram of a typical piezo scanner cut into two parts](images/piezo.png)

Diagram of a typical piezo scanner (cut into two parts). Separate pairs of electrodes control movement in the x, y and z directions

Tip and Cantilever

The cantilever is a long beam with a tip located at its apex. In most AFMs the motion of the tip is detected by reflecting a laser off the back surface of the cantilever.

Tip

The tip is generally pyramidal or tetrahedral in shape, and usually made from silicon or silicon nitride. Silicon can be doped and made conductive, allowing a tip-sample bias to be applied for making electrical measurements. Silicon nitride tips are not conducting. The geometry of the tip greatly affects the lateral resolution of the AFM, since the tip-sample interaction area depends on the tip radius. The radius of the apex of a new tapping mode tip is around 5–15 nm, but this increases quickly with wear. In general, the sharper the tip, the higher the resolution of the AFM image.

Cantilever

For contact mode AFM the cantilever needs to deflect easily without damaging the sample surface or tip. It should therefore have a low spring constant, which is achieved by making it *thin* (0.3–2 μm). It also needs a high resonant frequency to avoid vibrational instability, so it is typically *short* (100–200 μm). V-shaped cantilevers are often used for contact mode, as these provide low resistance to vertical deflection whilst resisting lateral torsion.

![Optical microscopy image of a triangular cantilever](images/cantilever_triangular.jpg)

Optical microscopy image of a triangular cantilever

For tapping mode AFM a high spring constant is required to reduce noise and instabilities. Rectangular cantilevers are often used for tapping mode.

![Optical microscopy image of a rectangular cantilever](images/cantilever_rectangular.jpg)

Optical microscopy image of a rectangular cantilever
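The link between these dimensions and the spring constant can be made concrete with the standard result for an end-loaded rectangular beam, k = Ewt³/4L³. The dimensions and modulus below are illustrative assumptions typical of a contact-mode silicon lever, not the specification of any real cantilever.

```python
# Spring constant of a rectangular cantilever, k = E*w*t^3 / (4*L^3).
E = 169e9        # Young's modulus of silicon, Pa (orientation-dependent; assumed)
w = 30e-6        # width, m
t = 1e-6         # thickness, m  ("thin" gives a low spring constant)
L = 150e-6       # length, m     ("short" keeps the resonant frequency high)

k = E * w * t**3 / (4 * L**3)
print(f"k ~ {k:.2f} N/m")        # ~0.38 N/m: soft enough for contact mode

# k scales with t^3, so doubling the thickness raises it eightfold:
print(f"k(2t) ~ {E * w * (2*t)**3 / (4 * L**3):.2f} N/m")
```

The cubic dependence on thickness and length is why small changes in geometry span the range from soft contact-mode levers (below 1 N/m) to stiff tapping-mode ones (tens of N/m).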
Detection of cantilever deflection

There are a number of ways to detect the deflection of the cantilever in an AFM. The most common method uses a laser beam: a *diode laser* is focused onto the back reflective surface of the cantilever, and reflects onto a photodetector. The photodetector is position sensitive, and usually has four sectors. The vertical deflection of the cantilever is determined from the difference in light intensity measured by the upper and lower sectors. It is also possible to measure the lateral deflection of the cantilever from the difference between the left and right sectors of the photodetector; this technique is known as lateral force microscopy (LFM).

![Diagram showing how the deflection of the cantilever is measured](images/cantilever deflection.png)

How the deflection of the cantilever is measured

Feedback

When the tip contacts the surface directly, the tip and/or surface may be damaged. If the tip is blunted or damaged, the imaging capability of the AFM is reduced. Soft surfaces (e.g. on biological samples) can also be easily damaged. In almost all operating modes, a feedback circuit is connected to the deflection sensor and attempts to keep the tip-sample interaction constant by controlling the tip-sample distance. This protects both the tip and the sample. Either the cantilever deflection (in static mode) or the oscillation amplitude (in dynamic mode) is monitored by the feedback circuit, which attempts to keep this at a setpoint value by adjusting the z height of the probe. The height of the probe is what is recorded to produce a topographic image.

In practice, however, feedback is never perfect, and there is always some delay between measuring a change from the setpoint and restoring it by adjusting the scanning height. In tapping mode, for example, this can be measured by the difference between the instantaneous amplitude of oscillation and the amplitude setpoint. This is known as the amplitude error signal, and highlights changes in surface height.

| Topography map | Amplitude error |
| - | - |
| Graph of the topography through a slice of the acquired image | Graph of the amplitude error through a slice of the acquired image |

Example images showing the relationship between topography and amplitude error signal. The two line plots demonstrate a slice through the acquired image.

The feedback system is affected by three main parameters:

* *Setpoint* – the value of the deflection or amplitude that the feedback circuit attempts to maintain. This is usually set such that the force on the cantilever is small, but the probe remains engaged with the surface.
* *Feedback gains* – the higher these are set, the faster the feedback system will react. However, if the gains are too high then the feedback circuit can become unstable and oscillate, causing high frequency noise in the image.
* *Scan rate* – scanning the probe over the surface more slowly gives the feedback circuit more time to react and results in better tracking, but this increases the time needed to acquire an image.

Scanner Related Artefacts

There are a number of problems and artefacts that can arise during atomic force microscopy. This page and the following pages discuss some of them, and how they can be overcome.

Hysteresis

The piezoelectric's response to an applied voltage is not linear, which gives rise to *hysteresis*. Since the scanner moves further per volt at the beginning of a scan line than at the end, this can cause artefacts in the images, especially at large scan sizes. It is overcome by using a non-linear voltage waveform calculated during a calibration procedure.
![Example of a voltage waveform calibrated to overcome hysteresis](images/waveform.png)

Example of a voltage waveform calibrated to overcome hysteresis

Scanner creep

If the applied voltage suddenly changes, e.g. to move the scanning position, the piezo-scanner's response is not instantaneous. It moves the majority of the distance quickly, and then the last part of the movement is slower. If this happens during scanning, the slow movement will cause distortion. This is known as *creep*.

![When a change in x-offset is applied, features are distorted in the x-direction](images/scanner_x_offset.jpg)

When a change in x-offset is applied, features are distorted in the x-direction

![When a change in y-offset is applied, features are distorted in the y-direction](images/scanner_y_offset.jpg)

When a change in y-offset is applied, features are distorted in the y-direction

![Image showing effect of abrupt change in scan size](images/scanner_size.jpg)

The scan size is changed abruptly, and features are distorted

Bow and tilt

Because of the construction of the piezo-scanner, the tip does not move in a perfectly flat plane. Instead it moves in a parabolic arc, as shown in the image below. This causes the artefact known as *scanner bow*. Also, the scanner and sample planes may not be perfectly parallel; this is known as *tilt*. Both of these artefacts can be removed using post-processing software.

![Diagram of scanner bow](images/scanner bow.png)

Diagram of scanner bow

Tip Related Artefacts

For densely packed features, the tip size can cause errors in determining the heights and sizes of the "islands", or the overall appearance of the surface. The sidewall angles of the tip can also lead to inaccurate lateral resolution measurements for high aspect ratio features.

The tip may pick up loose debris from the sample surface. This may be reduced by cleaning the sample with compressed air or N2 before use. Alternatively, the tip can be damaged during scanning, which degrades the images. This may be blunting of the tip, as shown in the SEM image below:

![SEM image of blunted tip](images/blunted_tip.jpg)

Below is an example of an image taken with a severely damaged tip. The shape due to tip damage appears several times over the image; effectively, the sample is imaging the tip rather than the other way round.

| Sample imaged with sharp tip | The same sample imaged using a severely damaged tip |
| - | - |

One easy way to check for tip artefacts is to rotate the sample (**not** just the scanning direction) by 90 degrees. This is demonstrated in the following animation:

Other Artefacts

Feedback related

The feedback is supposed to keep the tip-sample interaction at a fixed setpoint by adjusting the z height of the probe, as discussed earlier. However, if the scan speed across the sample is fast, the feedback may not be able to react quickly enough and tracking is poor. This can be seen by comparing the trace and retrace (forward and backward directions) for a single line in the scan. The following image shows the height and amplitude trace (white) and retrace (yellow) when tracking is good. The height trace and retrace are almost identical, and the amplitude retrace is a mirror image of the trace because it is in the opposite direction.

![Image of trace and retrace](images/trace_retrace_small.png)

When tracking is poor, the trace and retrace of height no longer overlap, and blurred images result.
This can happen because the gains are set too low, or the scan speed is too high.

![Image showing poor tracking](images/poor_tracking.jpg)

The images below are examples of poor tracking.

| Topography | Amplitude error |
| - | - |

With sharp slopes, poor tracking may result in overshoot, giving rise to "comet tails" in the image. The following images show indium aluminium nitride with small balls of indium on the surface. On the left, the gains are set high enough for the scan rate, and tracking is good. On the right, the gains are too low for the scan rate, and the tracking is poor. This results in overshooting off the edges of the indium dots, appearing in the image as comet tails. This can also be seen as the trace and retrace not overlapping.

| Good tracking | Poor tracking, resulting in "comet tails" |
| - | - |

However, if the gains are set too high, the feedback circuit can begin to oscillate. This causes high frequency noise.

![Amplitude error image for a scan with the gains set too high](images/gains_too_high.gif)

Amplitude error image for a scan with the gains set too high

The precise values used for feedback gains will vary between instruments. A good rule of thumb is to increase the gain until excess noise begins to appear, and then reduce it slightly to get good tracking with low noise.

Vibrations

AFMs are very sensitive to external mechanical vibrations, which generally show up as horizontal bands in the image.

![Evidence of external vibrations](images/vibrations.jpg)

Evidence of external vibrations in an amplitude error image

These vibrations may be transmitted through the floor, for example from footsteps or the use of a lift. They can be minimised by using a vibration isolation table and locating the AFM on a ground floor or below. Acoustic noise such as people talking can also cause image artefacts, as can draughts of air. An acoustic hood can be used to minimise the effects of both of these.

![](images/acoustic_hood_open.jpg)

Acoustic hood open

![](images/acoustic_hood_closed.jpg)

Acoustic hood closed

Summary

Atomic force microscopy may be used to image the micro- and nano-scale morphology of a wide range of samples, including both conductive and insulating materials, and both soft and hard materials. Successful imaging requires optimisation of the feedback circuit which controls the cantilever height, and an understanding of the artefacts which may arise due to the nature of the instrument and any noise sources in its immediate environment. Despite these issues, atomic force microscopy is a powerful tool in the emerging discipline of nanotechnology.

Questions

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. Which operating mode allows for the fastest scanning speeds?

| | |
| - | - |
| a | Contact mode |
| b | Tapping mode |
| c | Non-contact mode |

### Deeper questions

*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

2. If high frequency noise is seen in an image, what should be done?
| | |
| - | - |
| a | Increase the feedback gains |
| b | Decrease the feedback gains |
| c | Change the tip |
| d | Recalibrate the AFM |

Going further

### Books

* Meyer, Hug and Bennewitz, *Scanning Probe Microscopy: The Lab on a Tip*, Springer, 2003

### Websites

* SPM Principles (NT-MDT), including animations
* Pacific Nanotechnology
* nanoHUB.org – a presentation/podcast introducing AFM
Aims

On completion of this TLP you should:

* Understand the concept of anisotropy, and appreciate that the *response* (e.g. displacement) need not be parallel to the *stimulus* (e.g. force)
* Understand the nature of anisotropic behaviour in a range of properties, including electrical and thermal conductivity, diffusion, dielectric permittivity and refractive index, and be aware of a range of everyday examples
* Be familiar with the use of representation surfaces

Before you start

There are no specific prerequisites for this TLP, but you will find it useful to have a basic knowledge of crystal structures, as this will enable a better understanding of the structural origins of anisotropy; the related TLPs on crystal structures are a good starting point.

Introduction

Some physical properties, such as the density or heat capacity of a material, have values independent of direction; they are *scalar* properties. In contrast, as you will see, many properties vary with direction within a material. For example, thermal conductivity relates heat flow to temperature gradient, both of which need to be specified by direction as well as magnitude - they are *vector* quantities. Therefore thermal conductivity must be defined in relation to a direction in a crystal, and the magnitude of the thermal conductivity may be different in different directions.

A perfect crystal has long-range order in the arrangement of its atoms. A solid with no long-range order, such as a glass, is said to be *amorphous*. Macroscopically, every direction in an amorphous structure is equivalent to every other, due to the randomness of the long-range atomic arrangement. If a physical property relating two vectors were measured, it would not vary with orientation within the glass; i.e. an amorphous solid is *isotropic*.

In contrast, crystalline materials are generally *anisotropic*, so the magnitude of many physical properties depends on direction in the crystal. For example, in an isotropic material, the heat flow is in the same direction as the temperature gradient and the thermal conductivity is independent of direction. However, as will be demonstrated in this TLP, in an anisotropic material heat flow is no longer necessarily parallel to the temperature gradient, and as a result the thermal conductivity may be different in different directions. The occurrence of anisotropy depends on the symmetry of the crystal structure. Cubic crystals are isotropic for many properties, including thermal and electrical conductivity, but crystals with lower symmetry (such as tetragonal or monoclinic) are anisotropic for those properties.

Many (but not all) physical properties can be described by mathematical quantities called *tensors*. A non-directional property, such as density or heat capacity, can be specified by a single number. This is a *scalar*, or *zero rank tensor*. Vector quantities, for which both magnitude and direction are required, such as temperature gradient, are *first rank tensors*. Properties relating two vectors, such as thermal conductivity, are *second rank tensors*. Third and higher rank tensor properties also exist, but will not be considered here, since the mathematical descriptions are more difficult.

Mechanical analogy of anisotropic response

A *stimulus* (such as a force or an electric field) does not necessarily induce a response (such as a displacement or a current) parallel to it. This can be demonstrated with a simple mechanical model, consisting of a mass supported by two springs.
![Diagram of mechanical model](images/image01.gif)

Mechanical model: a mass supported by two springs

A force, *F*, applied at an angle *θ* to the central mass acts as the stimulus. The response is the displacement, *r*, of the mass, at an angle *φ*.

![Diagram showing applied force and resulting displacement](images/image02.gif)

Diagram showing applied force and resulting displacement

For second rank tensor properties in anisotropic materials, parallel responses occur along orthogonal directions known as the *principal directions*. The following photographs show the response of the model under the application of various forces. (Click on an image to view a larger version.)

| Model with no force applied | Model with horizontal force producing horizontal displacement (parallel response) (*θ* = *φ* = 90°) |
| - | - |
| Model with vertical force producing vertical displacement (parallel response) (*θ* = *φ* = 0°) | Model with 45° displacement from non-45° force (non-parallel anisotropic response) (*θ* = approx. 35°, *φ* = 45°) |

Note that the displacement of the mass is only parallel to the force when the force acts parallel or perpendicular to the springs. These are the directions of the *principal axes*.

### Symmetry

As a rule, the symmetry present in crystalline materials (such as mirror planes and rotational axes) determines or restricts the orientation of the principal axes. In this model, there exist two orthogonal mirror planes perpendicular to the plane of the model, one parallel and the other perpendicular to the springs, and a third mirror plane exists in the plane of the model. The principal axes lie along the intersections of these mirror planes. Real crystals typically show more complicated symmetry, but the orientation of the principal axes is still determined by the main symmetry elements. Anisotropic properties may be analysed by resolving onto these principal axes.

The symmetry elements of any physical property of a crystal must include the symmetry elements of the point group of the crystal (Neumann's Principle). Thus crystals that, for example, display spontaneous polarization (see the later section on anisotropic dielectric permittivity) can belong to only a few symmetry classes. It is worth noting that the absence of a centre of symmetry does not necessarily imply anisotropic second rank tensor properties, nor does the presence of a centre of symmetry rule out anisotropy in such properties.

Anisotropic thermal conductivity

When a temperature gradient is present in a material, heat will always flow from the hotter to the colder region to achieve thermal equilibrium. As mentioned in the introduction, thermal conductivity is the property that relates heat flow to the temperature gradient. In an isotropic material:

\[J = k\frac{dT}{dr}\]

where J = heat flow, k = thermal conductivity, and dT/dr = temperature gradient.

### Anisotropic thermal conductivity in quartz

In quartz, the thermal conductivity perpendicular to the c-axis is 6.5 W m⁻¹ K⁻¹, while parallel to c it is 11.3 W m⁻¹ K⁻¹. The anisotropic thermal conductivity of quartz can easily be seen using a simple demonstration. Two sections cut from a quartz crystal, one perpendicular to the c-axis and one parallel to it, are in turn mounted as shown in the diagram below. Pieces of plastic containing a heat-sensitive liquid crystal are then glued to the top surfaces, and the sections are heated from a point at their centre using a soldering iron.
The anisotropic thermal conductivity of quartz can easily be seen using a simple demonstration. Two sections cut from a quartz crystal, one perpendicular to the c-axis and one parallel to it, are in turn mounted as shown in the diagram below. Pieces of plastic containing a heat-sensitive liquid crystal are then glued to the top surfaces and the sections are heated from a point at their centre, using a soldering iron. As the quartz heats up, the heat-sensitive film changes colour, which allows us to see how quickly the heat is conducted away from the centre. The colours indicate contours of constant temperature.

![Diagram of experimental apparatus](images/image07.gif)

Diagram of experimental apparatus

When heating the section cut perpendicular to the c-axis, the observed shape is a circle, showing that the thermal conductivity is the same in all directions in this plane. However, when using the section cut parallel to the c-axis, the shape seen is an ellipse, which shows that the thermal conductivity in this plane is direction-dependent.

Video: a section of quartz cut perpendicular to the c-axis being heated from a point at its centre

Video: a section of quartz cut parallel to the c-axis being heated from a point at its centre

The heat flow does not have to be parallel to the thermal gradient. A result of this can be seen by considering one-dimensional conduction in a long rod and a thin plate, both made of the same anisotropic material, arranged so that the normal to the plate and the length of the rod are oriented in an arbitrary general direction.

### Thin Plate

![Diagram of thin plate](images/image09.gif)

Here the geometry of the set-up constrains the temperature gradient to be perpendicular to the plate. Due to the anisotropic nature of the material, the heat flux, **J**, will be in the direction shown, say. However, the thermal *conductivity* perpendicular to the plate is defined as the component of the heat flux parallel to the temperature gradient, j‖, divided by the magnitude of that gradient. Thus:

\[k_{\parallel} = \frac{j_{\parallel}}{\mathrm{grad}T}\]

### Rod

![Diagram of rod](images/image10.gif)

Now the heat must flow along the rod, and the temperature gradient will be in a different direction, as shown. Here the thermal *resistivity* is defined as the component of the temperature gradient parallel to the rod, gradT‖, divided by the magnitude of the heat flux. Thus:

\[\rho_{\parallel} = \frac{\mathrm{grad}T_{\parallel}}{J}\]

where ρ is the resistivity. It is important to realise that in anisotropic materials

\[\rho_{\parallel} \ne \frac{1}{k_{\parallel}}\]

except along the principal axes. Only in isotropic materials is the resistivity always the reciprocal of the conductivity, and vice versa.

*Note*: By using a large, thin plate and a long rod, the alterations in the directions of heat flow and temperature gradient close to the edges (of the plate) or ends (of the rod) - "edge effects" and "end effects" - affect only a very small proportion of the sample and can be ignored.
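The plate and rod definitions give genuinely different numbers in an anisotropic crystal. Anticipating the result derived in the next section (resolving onto the principal axes gives k‖ = k₁l² + k₂m² + k₃n² for the plate, and likewise ρ‖ = l²/k₁ + m²/k₂ + n²/k₃ for the rod), here is a minimal numerical check using the quartz values quoted earlier:

```python
import numpy as np

k = np.array([6.5, 6.5, 11.3])       # principal conductivities of quartz (W/(m K))

n = np.array([1.0, 0.0, 1.0])
n /= np.linalg.norm(n)               # direction midway between x and the c-axis

k_par = np.sum(k * n**2)             # plate geometry:  k|| = sum k_i n_i^2
rho_par = np.sum(n**2 / k)           # rod geometry:  rho|| = sum n_i^2 / k_i

print("k_par = %.2f W/(m K), 1/rho_par = %.2f W/(m K)" % (k_par, 1 / rho_par))
# 8.90 vs 8.25: not reciprocals, except along the principal axes
```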
Derivation of the anisotropy ellipsoid

The variation of anisotropic properties such as conductivity can conveniently be illustrated by a "representation surface". In many cases this is an ellipsoid. Suppose a three-dimensional temperature gradient, gradT, lies along a direction specified by direction cosines l, m and n, where, for example, l is the cosine of the angle between the x-axis and the temperature gradient vector. Then the components of the temperature gradient parallel to the principal axes will be:

\[\mathrm{grad}T_x = (\mathrm{grad}T)\,l \qquad \mathrm{grad}T_y = (\mathrm{grad}T)\,m \qquad \mathrm{grad}T_z = (\mathrm{grad}T)\,n\]

The components of the heat flux are:

\[j_x = k_1(\mathrm{grad}T)\,l \qquad j_y = k_2(\mathrm{grad}T)\,m \qquad j_z = k_3(\mathrm{grad}T)\,n\]

where k₁, k₂ and k₃ are the values of thermal conductivity along the principal axes, x, y and z, and are called the principal values. Hence, resolving back along the direction of the temperature gradient, the heat flux is:

\[j_{\parallel} = j_x l + j_y m + j_z n = (k_1 l^2 + k_2 m^2 + k_3 n^2)\,\mathrm{grad}T\]

Thus the value of the thermal conductivity, k_lmn, defined by

\[k_{lmn} = \frac{j_{\parallel}}{\mathrm{grad}T}\]

is related to the principal values and the direction cosines by:

\[k = k_1 l^2 + k_2 m^2 + k_3 n^2\]

Now consider a point (x, y, z) lying along the direction (l, m, n) at a distance r from the origin, so that:

\[l = \frac{x}{r} \qquad m = \frac{y}{r} \qquad n = \frac{z}{r}\]

Substituting in our equation for k gives:

\[k = \frac{k_1 x^2}{r^2} + \frac{k_2 y^2}{r^2} + \frac{k_3 z^2}{r^2} = \frac{1}{r^2}\left(k_1 x^2 + k_2 y^2 + k_3 z^2\right)\]

Setting

\[r = \frac{1}{\sqrt{k}}\]

then

\[k_1 x^2 + k_2 y^2 + k_3 z^2 = 1\]

If all the principal values are positive (as they must be for thermal conductivity), then this equation describes the surface of an ellipsoid. The general equation of an ellipsoid (with semi-axes a, b, c) is:

\[\frac{x^2}{a^2} + \frac{y^2}{b^2} + \frac{z^2}{c^2} = 1\]

Thus for this *representation ellipsoid*, the semi-axes are:

\[\frac{1}{\sqrt{k_1}},\ \frac{1}{\sqrt{k_2}},\ \frac{1}{\sqrt{k_3}}\]

The radius of this ellipsoid in a general direction is equal to the value of \(\frac{1}{\sqrt{k}}\) in that direction. Thus the value of k in a particular direction - the ratio of the component of the heat flow in that direction to the magnitude of the temperature gradient in that direction - can easily be calculated from the radius in that direction.

![Diagram of ellipsoid](images/image11.gif)

An equivalent representation surface exists for electrical conductivity, and a similar representation surface exists for refractive index - the optical indicatrix. Both of these are discussed later in this TLP.

### Using the representation surface for thermal conductivity

As shown above, the representation surface for thermal conductivity is an ellipsoid with semi-axes \(\frac{1}{\sqrt{k_1}}\), \(\frac{1}{\sqrt{k_2}}\) and \(\frac{1}{\sqrt{k_3}}\). The distance between the centre of the ellipsoid and a point, P, on its surface is equal to \(\frac{1}{\sqrt{k}}\) at this point.

As well as determining the conductivity, the representation surface can be used to relate the directions of heat flow and the temperature gradient. If the temperature gradient is applied radially from the centre of the ellipsoid, then the direction of the resulting heat flow is perpendicular to the tangent plane constructed at the point at which the temperature gradient direction meets the surface of the ellipsoid.
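This geometrical construction can be checked directly: the normal to the surface k₁x² + k₂y² + k₃z² = 1 at a point r is parallel to (k₁x, k₂y, k₃z), which is exactly the direction of the heat flux produced by a temperature gradient applied along r. A minimal sketch (the 45º direction is again an arbitrary choice):

```python
import numpy as np

k = np.array([6.5, 6.5, 11.3])                # principal conductivities (quartz)
g = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)  # unit vector along gradT

# Point where this direction meets the ellipsoid k1 x^2 + k2 y^2 + k3 z^2 = 1
# (the radius in this direction is 1/sqrt(k)):
r = g / np.sqrt(np.sum(k * g**2))

normal = k * r        # surface normal, proportional to grad(k1 x^2 + k2 y^2 + k3 z^2)
J = k * g             # heat flux components, j_i = k_i (gradT)_i

cosang = np.clip((normal @ J) / (np.linalg.norm(normal) * np.linalg.norm(J)), -1.0, 1.0)
print("angle between surface normal and heat flux: %.6f deg" % np.degrees(np.arccos(cosang)))
# -> 0: the radius-normal construction reproduces the heat flux direction
```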
In an isotropic material, the representation surface is a sphere, and the heat flow is always in the same direction as the temperature gradient. In an anisotropic material, however, heat flow is no longer necessarily parallel to the temperature gradient.

We will now revisit the **anisotropic thermal conductivity of quartz**. Because of the crystal symmetry of quartz, k₁ = k₂ ≠ k₃, and so the representation surface for the thermal conductivity of quartz is a uniaxial ellipsoid of revolution. Depending on the relative values of k₁ and k₃, this is a shape either like a rugby ball (k₃ < k₁) or a Smartie (k₃ > k₁). Consider sections of the ellipsoid:

1. *Perpendicular to the c-axis:* Since k₁ = k₂, this section is a circle, and the direction of heat flow is parallel to the temperature gradient.

![Diagram of heat flow](images/image13.gif)

2. *Perpendicular to the b-axis (i.e. parallel to the c-axis):* Since k₁ ≠ k₃, this section is an ellipse, and the direction of heat flow is no longer parallel to the temperature gradient, except in the directions of the principal axes (which here correspond to the semi-axes of the ellipse).

![Diagram of heat flow](images/image14.gif)

The direction of the heat flux is always parallel to the normal of the tangent plane drawn at the point at which the direction of the temperature gradient intersects the representation ellipsoid. This is called the radius-normal property.

Anisotropic electrical conductivity =

The current density, **J**, is related to the electric field, **E**, by

\[J = \sigma E\]

In an analogous way to thermal conductivity, the current density does not have to be parallel to the electric field. Three different examples of anisotropic electrical conductivity are described here. These show how the anisotropy is related to the crystal structure.

In metals, conduction occurs by transport of delocalised electrons through the crystalline lattice, under the influence of an applied electric field. The conductivity is limited by the scattering of the electrons by imperfections in the periodicity of the structure (vibrations, impurities, etc.). Because of the high symmetry in cubic metals, the overall drift velocity is parallel to the electric field, i.e. there is an isotropic response. However, in hexagonally close-packed metals the nature of the symmetry of the crystalline array allows the conductivity to be anisotropic. For example, in cadmium it varies from 1.3 × 10⁷ S m⁻¹ along the six-fold axis to 1.5 × 10⁷ S m⁻¹ perpendicular to that axis.

Graphite consists of layered planes of carbon atoms with a structure as shown below. The layers are stacked above one another in a staggered fashion, the spacing between layers being about 2.3 times the distance between adjacent carbon atoms within a layer.

![Diagram of graphite structure](images/image16.gif)

The hexagonal structure of the graphite planes

Here the hexagonal carbon rings provide the delocalised electrons, allowing easy conduction within the planes. Conduction perpendicular to the planes is very much lower (around three orders of magnitude smaller) - this is highly anisotropic behaviour. The structure also creates anisotropy in other properties of graphite, such as thermal conductivity and thermal expansion.

This planar anisotropy is also seen in high temperature superconductors like BiSrCaCuO. The copper oxide "ab" planes provide superconducting pathways for electrons, but such pathways are not available perpendicular to the planes.
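How far the current density can swing away from the field depends on the ratio of the principal conductivities. A short sketch (using the cadmium values above, and a nominal three-orders-of-magnitude ratio for graphite) scans the field direction and reports the largest angle between J and E:

```python
import numpy as np

def max_deviation(sig_perp, sig_par):
    """Largest angle between J and E for a uniaxial conductor, scanning the
    angle theta of the field away from the unique (high-symmetry) axis."""
    theta = np.linspace(1e-3, np.pi / 2, 2000)
    phi = np.arctan2(sig_perp * np.sin(theta), sig_par * np.cos(theta))  # angle of J
    return np.degrees(np.max(np.abs(phi - theta)))

print("cadmium:  %.1f deg" % max_deviation(1.5e7, 1.3e7))  # weakly anisotropic
print("graphite: %.1f deg" % max_deviation(1.0, 1e-3))     # ~1000:1, nominal values
# cadmium: ~4 deg; graphite: ~86 deg
```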
Anisotropic diffusion =

The rate of diffusion of a specific atomic species is measured in terms of the coefficient of diffusion, *D*, which relates the flux of atoms (number crossing unit area in unit time) to the concentration gradient. In an isotropic material:

\[J = -D\,(\mathrm{grad}\,c)\]

where J = flux (number per unit area per unit time) of an atomic species across a plane normal to the concentration gradient, grad c. The negative sign indicates that the flux is from high to low concentrations. For an isotropic material, such as an amorphous solid, the diffusion coefficient is independent of direction.

Flux and concentration gradient are both vectors, so the coefficient of diffusion is a second rank tensor in an anisotropic material. The diffusion is anisotropic and can be described by three principal values of D, in the same way as thermal and electrical conductivity. (*Note*: we are not concerned here with the dependence of diffusion on temperature.)

### Example: diffusion of Ni in olivine, (Mg,Fe)2SiO4

Olivine is the name for a series of minerals between two end members, fayalite (Fe2SiO4) and forsterite (Mg2SiO4). The two minerals form a solid solution in which the iron and magnesium atoms can be substituted for each other without significantly changing the crystal structure. Olivine has an orthorhombic structure. The lattice parameters depend on the precise composition, but a typical set of values is: a = 0.49 nm, b = 1.04 nm, c = 0.61 nm.

The principal values of D for diffusion of nickel atoms in olivine also depend on the precise composition of the olivine. One set of values, for an unspecified composition at 1423 K (1150 ºC), is:

Dx = 4.40 × 10⁻¹⁸ m² s⁻¹, Dy = 3.35 × 10⁻¹⁸ m² s⁻¹, Dz = 124.0 × 10⁻¹⁸ m² s⁻¹

where the values correspond to the diffusion coefficients along the *x*, *y* and *z* crystallographic axes respectively. The exact values are unimportant for this discussion, but it is important to appreciate that diffusion occurs much faster parallel to the *z*-axis than in directions in the plane perpendicular to it. This happens because of the way in which the atoms are arranged in the crystal structure, a plan view of which is shown below (projected down the *x*-axis). Note that x = 0, 25, 50 and 75 represent the *x*-coordinates of the atoms (as a percentage of the unit cell dimension *a*).

Crystal structure of olivine

As you can see, there are chains of M²⁺ sites (where M²⁺ represents a metal ion, in this case either Mg or Fe) parallel to the *z*-axis. Diffusion occurs by Ni²⁺ substituting for M²⁺ along these chains, making diffusion in this direction much faster than in any other.

Rotatable model of the olivine structure
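The representation-surface result for thermal conductivity carries over directly: for a concentration gradient with direction cosines (l, m, n), D = D₁l² + D₂m² + D₃n². A minimal sketch, applying the diffusivities above to the [111] lattice direction (an arbitrary choice for illustration; recall that in an orthorhombic crystal the Cartesian components of [uvw] are (ua, vb, wc)):

```python
import numpy as np

# Principal diffusivities of Ni in olivine at 1423 K (m^2/s), from the text
D = np.array([4.40e-18, 3.35e-18, 124.0e-18])
a, b, c = 0.49, 1.04, 0.61        # typical lattice parameters (nm)

u, v, w = 1, 1, 1                 # the [111] lattice direction
n = np.array([u * a, v * b, w * c])
n /= np.linalg.norm(n)            # direction cosines (l, m, n)

D_111 = np.sum(D * n**2)          # D = D1 l^2 + D2 m^2 + D3 n^2
print("D along [111] = %.2e m^2/s" % D_111)   # ~3.0e-17 m^2/s
```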
### Fast ion conduction

When the structure of a solid material contains a large number of vacant sites (as a consequence of its composition), it is likely to show high ionic mobility for some species of ion (sometimes an anion, sometimes a cation), even at modest temperatures. A high ionic mobility means that charge can be transferred very easily, and conductivities can approach those of aqueous electrolyte solutions or molten salts.

For a material to be a fast ion conductor, there should be:

* a high concentration of charge carriers
* a high concentration of vacant sites in the structure
* a low activation energy for ionic migration

There are a number of different types of fast ion conductor. The material may be either a cationic or an anionic conductor, depending on the charge of the mobile ion. The material may also have a fully ionic structure, or the mobile ions may be in a covalent host structure. The dimensionality of the mobility can also vary: the ions may move through channels (1D), within layers of a structure (2D), or throughout the whole structure (3D).

### Examples of fast ion conductors

#### 1D fast ion conductors: tungsten bronzes, MxWO3

In these materials, WO6 octahedra form a covalent network. Usually either M⁺ or M²⁺ ions (for example Na⁺ or Cu²⁺) are the mobile species, and these move along channels in the structure. As a result, the conductivity is very high in this direction. An example of a tungsten bronze is shown below, projected along the *z*-axis, which is parallel to the channel direction.

![Structure of tungsten bronze](images/image18.gif)

Structure of tungsten bronze

#### 2D fast ion conductor: sodium beta-alumina, Na β-Al2O3

Sodium beta-alumina consists of blocks of γ-alumina (which has a spinel structure, the details of which need not be considered here) connected by a layer of bridging oxygen and sodium ions. Not all the Na⁺ sites are occupied, and conduction occurs by the movement of the ions within this layer.

![](images/image19.gif)

Structure of sodium beta-alumina

Anisotropic dielectric permittivity =

When an electric field, **E**, is applied to a dielectric solid, positive and negative charges are displaced in opposite directions within the solid, creating *polarisation*, **P**. This is defined as the net dipole moment per unit volume. (An electrical dipole is created by a small separation of equal and opposite charges.) In an isotropic material, these vectors are related by:

\[P = (\varepsilon - 1)\varepsilon_0 E\]

where ε₀ is the permittivity of free space, and ε is the relative dielectric permittivity (a scalar constant in this case). As with the other examples, in anisotropic materials this scalar has to be replaced by a tensor.

Often the occurrence of highly anisotropic dielectric permittivity is associated with *ferroelectricity* (spontaneous polarisation reversible by an electric field) and *pyroelectricity* (temperature dependent generation of polarisation).

### Example: barium titanate

The high temperature form of BaTiO3 has the cubic perovskite structure with a primitive cubic lattice. At 150 °C, a = 0.401 nm. In the temperature range 0 ºC to 120 ºC, BaTiO3 is tetragonal; at 100 °C it has *a* = *b* = 0.400 nm and *c* = 0.404 nm.

| | |
| - | - |
| Video of the cubic perovskite structure | Rotatable model of the cubic perovskite structure |
| Video of the tetragonal perovskite structure | Rotatable model of the tetragonal perovskite structure |

The tetragonal-cubic phase transition is highlighted in the following video. It shows a thin section of barium titanate viewed between crossed polars, which is heated through the transition temperature and then allowed to cool naturally. Initially, the sample is below the transition temperature, and since the domains of the anisotropic tetragonal phase exhibit birefringence, it is brightly coloured when viewed between crossed polars. When the sample reaches the transition temperature, the isotropic cubic phase forms, which appears black.
The heat source is then removed, so the sample cools down and again undergoes a phase transition to return to the anisotropic tetragonal phase.

Video of the barium titanate phase transition

In the tetragonal form, the Ti ion is displaced by a small distance, from the centre of the surrounding octahedron of nearest neighbour oxygen ions, along the *z*-direction. A spontaneous polarisation along the *z*-axis is generated, but by symmetry there is no polarisation in the *x-y* plane. Note that the polarisation can be oriented forwards or backwards along the tetragonal axis. This polarisation is easily changed by applying an electric field parallel to the *z*-axis, but a field applied in the *x-y* plane has little effect on the polarisation. Consequently the dielectric permittivity is anisotropic.

The refractive index, *n*, is given by the square root of the relative dielectric permittivity, i.e. \(n = \sqrt{\varepsilon}\). The resulting optical effects are considered in the next section.

Optical anisotropy and the optical indicatrix =

In transparent materials with anisotropic dielectric permittivity, important optical effects can be observed. Recall that a light wave may be considered in terms of oscillating transverse electric and magnetic fields. Here we concentrate on the effects of the electric field. When discussing optical properties it is important to remember that this field is in a direction lying in the wavefront. It is *not* necessarily perpendicular to the direction of propagation.

The interaction between the electric field and the material is governed by the dielectric permittivity discussed in the previous section. A large value of the permittivity gives rise to a large refractive index, and consequently the wave travels relatively slowly. (The refractive index n is related to the velocity of light in the medium, v, and the velocity in a vacuum, c, by n = c/v.)

In an anisotropic material the refractive indices can again be illustrated by a representation surface - the *optical indicatrix*. For each (orthogonal) principal direction in the anisotropic material, there is an associated principal refractive index. The variation of the refractive index with the plane of the wavefront can be represented by an ellipsoid. The semi-axes of this optical indicatrix are directly proportional to the principal refractive indices.

Optically isotropic materials (e.g. cubic crystals) have one refractive index, with a spherical indicatrix. Crystals with one 3-, 4- or 6-fold axis of symmetry have a principal axis of the ellipsoid along this symmetry axis. These *uniaxial* crystals have an indicatrix which is an ellipsoid with a circular cross-section perpendicular to the major symmetry axis - an ellipsoid of revolution. They have two principal refractive indices and one *optic axis* (parallel to the symmetry axis and so perpendicular to the circular section).

In general, the electric field of a light wave experiences two *permitted vibration directions*, known as the fast and slow directions, both in the plane of the wavefront, and determined by the shape of the indicatrix. Consider a section passing through the origin of the indicatrix for a uniaxial crystal, and orientated parallel to the wavefronts, as shown by the dotted line in the left-hand figure below. The two permitted vibration directions are given by the major and minor axes of this section. The corresponding refractive indices are the lengths of these axes.
The section will be elliptical unless the light is travelling along the optic axis, so that the plane of the wavefront coincides with the circular section of the ellipsoid. The observation of two refractive indices for a general orientation of the wavefront is known as *birefringence*. Related effects, such as stress-induced birefringence and photoelasticity, are discussed in a separate TLP.

Diagrams: (left) a section of the indicatrix parallel to the wavefront; (right) a cross-section of the indicatrix containing the optic axis and the extraordinary vibration direction

The *ordinary* vibration direction lies in the circular section of the indicatrix (i.e. perpendicular to the optic axis), with refractive index \(n_o\). Light travelling along the optic axis experiences just this refractive index - the *ordinary refractive index*.

The *extraordinary* vibration direction lies in the plane of the wavefront and perpendicular to the ordinary vibration direction, and has refractive index \(n'_e\). The value of \(n'_e\) is determined from the ordinary refractive index and the principal extraordinary refractive index \(n_e\), as follows. Consider a cross-section of the indicatrix (as shown in the diagram on the right above), containing the optic axis and the extraordinary vibration direction. The equation for this ellipse is:

\[\frac{x^2}{n_o^2} + \frac{z^2}{n_e^2} = 1\]

For the point P, x = \(n'_e\cos\theta\) and z = \(n'_e\sin\theta\). Therefore

\[\frac{(n'_e \cos\theta)^2}{n_o^2} + \frac{(n'_e \sin\theta)^2}{n_e^2} = 1\]

which rearranges to

\[\frac{1}{n'^{2}_e} = \frac{\cos^2\theta}{n_o^2} + \frac{\sin^2\theta}{n_e^2}\]

so that \(n'_e\) runs from \(n_o\) (for light travelling along the optic axis) to \(n_e\) (for light travelling perpendicular to it).

For a general extraordinary wave, the direction in which the light energy travels, the ray direction, is no longer perpendicular to the wavefront. For an explanation see, for example, the links in the Going further section. As a side note, the relative magnitudes of \(n_o\) and \(n_e\) determine whether a material is defined as optically positive or negative - the optical sign.

![](images/image22.gif)

### Example: calcite rhomb

The birefringence (defined as \(|n_o - n_e|\)) in calcite is so large that two images can easily be observed when viewing an object through a suitable crystal with the naked eye. One image is due to the ordinary wave (with electric field vibrating parallel to the ordinary vibration direction) and the other is due to the extraordinary wave (with electric field vibrating parallel to the extraordinary vibration direction).

The DoITPoMS logo viewed through a calcite rhomb

In calcite, the planar carbonate groups all lie in planes normal to the three-fold axis - the optic axis. The groups are well separated in the direction of the axis. This makes the crystal less polarisable parallel to the axis, so the refractive index for vibrations parallel to the triad axis is smaller than for vibrations perpendicular to it (making the crystal optically negative: \(n_e < n_o\)).

Video of the calcite structure
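To put numbers to this, here is a minimal sketch using typical values for calcite (roughly \(n_o\) = 1.658 and \(n_e\) = 1.486; these figures are assumed, not quoted in the text), evaluating \(n'_e\) for several wavefront orientations:

```python
import numpy as np

n_o, n_e = 1.658, 1.486   # typical refractive indices for calcite (assumed values)

def n_e_prime(theta_deg):
    """Extraordinary refractive index when the extraordinary vibration direction
    makes an angle theta with the circular section (equivalently, when the wave
    normal makes an angle theta with the optic axis)."""
    t = np.radians(theta_deg)
    return 1.0 / np.sqrt(np.cos(t)**2 / n_o**2 + np.sin(t)**2 / n_e**2)

for theta in (0, 30, 60, 90):
    print("theta = %2d deg:  n'_e = %.3f" % (theta, n_e_prime(theta)))
# runs from n_o (1.658) down to n_e (1.486); birefringence |n_o - n_e| ~ 0.17
```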
Liquid crystals =

When most solids melt, they form an isotropic fluid, whose optical, electrical and magnetic properties do not depend on direction. However, when some materials melt, over a limited temperature range they form a fluid that exhibits anisotropic properties. These materials generally consist of organic molecules that have an elongated shape, with a rigid central region and flexible ends. The molecules in a *liquid crystal* do not necessarily exhibit any positional order, but they do possess a degree of orientational order.

The anisotropic behaviour of liquid crystals is caused by the elongated shape of the molecules. The physical properties of the molecules are different when measured parallel or perpendicular to their length, and residual alignment of the rods in the fluid leads to anisotropic bulk properties. This residual alignment occurs as a result of preferential packing arrangements, and also of electrostatic interactions between molecules, which are most favourable (lowest in energy) in aligned configurations.

There are three types of liquid crystal: nematic, smectic and cholesteric. In the liquid crystalline phase, the vector about which the molecules are preferentially oriented, **n**, is known as the "director". The long axes of the molecules tend to align in this direction.

![Three types of liquid crystal](images/image24.gif)

Three types of liquid crystal

In addition to the long range orientational order of nematic liquid crystals, smectic liquid crystals also have one-dimensional long range positional order, the molecules being arranged into layers. A cholesteric (or twisted nematic) liquid crystal is chiral: the molecules have left or right handedness. When the molecules align in layers, this causes the director orientation to rotate slightly between the layers, eventually bringing the molecules back into the original orientation. The distance required to achieve this is known as the *pitch* of the twisted nematic, as seen in the diagram above. The pitch is not equal to the distance marked x, because only 180º of rotation occurs over this length, so the molecules are aligned antiparallel to their starting orientation.

When viewed between crossed polars, thin films (approximately 10 μm thick) of liquid crystals exhibit *schlieren textures*, as seen in the micrograph below, which shows a nematic liquid crystalline polymer.

Micrograph of a nematic liquid crystalline polymer, courtesy of Professor TW Clyne

The black brushes are regions where the director is either parallel or perpendicular to the plane of polarisation of the incident radiation, and the points at which the brushes meet are known as disclinations.

If the temperature of a liquid crystal is raised, the constituent molecules have more energy, and are able to move and rotate more, so the liquid crystal becomes less ordered. As a result, the magnitude of the anisotropy of the bulk properties of the liquid crystal decreases, usually eventually resulting in an isotropic fluid.

Liquid crystals are used in many different applications, for example the displays of calculators, digital watches and mobile phones.

Summary =

In some materials a property will be the same irrespective of the direction in which it is measured, but this is not always the case. On completion of this TLP, you should now understand the concept of anisotropy, and appreciate that a response can be non-parallel to the applied stimulus. Anisotropy in a range of properties has been discussed, including electrical and thermal conductivity, diffusion, dielectric permittivity, and optical properties. You should also now be familiar with the use of representation surfaces for a range of anisotropic properties, including the basis behind their mathematical description.

Anisotropic properties are exploited in many applications. In polarised-light microscopy, a quartz wedge can be used to determine birefringence and optical sign. Liquid crystals have electronic uses such as displays, and the liquid crystalline state has advantages in the processing of polymers (such as Kevlar).
The anisotropic thermal conductivity of polymer thin films finds use in microelectronic devices, for example solid-state transducers.

Anisotropic properties described by higher than second rank tensors (not discussed here) can also have useful applications. Examples include:

* Piezoelectricity (relating an applied stress to the induced polarisation)
* The electro-optic effect (when a field causes a change in the dielectric impermeability)
* Elastic compliance and elastic stiffness (relating stress and strain)
* The piezo-optical effect (when a stress induces a change in refractive index)
* Electrostriction (strain arising from an electric field)

Non-tensor properties can also demonstrate anisotropy; for example, yield stress can vary with the direction of the applied stress.

Questions =

1. Which of these properties of a crystal may be anisotropic?

| | | | |
| - | - | - | - |
| Yes | No | a | Density |
| Yes | No | b | Young's modulus |
| Yes | No | c | Surface energy |
| Yes | No | d | Refractive index |
| Yes | No | e | Electrical conductivity |
| Yes | No | f | Thermal conductivity |
| Yes | No | g | Heat capacity |
| Yes | No | h | Melting point |
| Yes | No | i | Coefficient of thermal expansion |

2. Two similar transparent uniaxial crystals show the same (principal) extraordinary refractive index, *n*e. However, one is optically positive and the other is optically negative. In which will the light travel faster along the optic axis?

3. Which of these could not induce anisotropy in an initially isotropic material?

| | | |
| - | - | - |
| | a | Application of a stress |
| | b | Application of an electric field |
| | c | Application of a magnetic field |
| | d | Application of a high temperature |

4. Below 0 °C a particular material has a crystal structure that gives rise to anisotropic thermal conductivity. At room temperature the thermal conductivity of a sample of this material is found to be isotropic. In what circumstances would the following hypotheses explain this observation?

| | | |
| - | - | - |
| | a | The sample is polycrystalline |
| | b | The sample has undergone a phase transition when brought up to room temperature |
| | c | The principal values of thermal conductivity have changed with temperature |

5. Which of these materials will show isotropy in its mechanical properties?

| | | |
| - | - | - |
| | a | Wood |
| | b | Carbon fibre reinforced polymer |
| | c | Window glass |
| | d | Extruded polyethylene |

6. An olivine crystal has the following diffusion constants for Ni at a certain temperature: Dx = 6 × 10⁻¹⁸ m² s⁻¹, Dy = 4 × 10⁻¹⁸ m² s⁻¹, Dz = 120 × 10⁻¹⁸ m² s⁻¹. The unit cell dimensions are: a = 0.5 nm, b = 1.0 nm, c = 0.6 nm. By considering the representation surface, what will the diffusion constant be when the concentration gradient lies along the [101] direction?

7. A certain orthorhombic crystal has the following principal values of thermal conductivity: kx = 6.25 W m⁻¹ K⁻¹, ky = 1.00 W m⁻¹ K⁻¹, kz = 1.75 W m⁻¹ K⁻¹ (where the subscripts represent conductivity parallel to the *x*, *y* and *z* axes respectively). The unit cell dimensions are: a = 0.8 nm, b = 0.6 nm, c = 1.0 nm. Calculate the thermal conductivity along the [111] direction.

8. Explain how anisotropy is involved in the operation of the following devices:
1. a liquid crystal display
2. a fuel cell that uses a solid state electrolyte
3. a pyroelectric intruder alarm

9. Suggest ways of distinguishing between the answers to Question 4.

Going further =

### Books
* A. Putnis, *Introduction to Mineral Sciences*, CUP, 1992 (specifically Chapter 2, "Anisotropy and physical properties")
* R.E. Newnham, *Structure-Property Relations (Crystal Chemistry of Non-metallic Materials)*, Springer-Verlag, 1975
* D.R. Lovett, *Tensor Properties of Crystals*, IOP, 1999
* P.J. Collings and M. Hird, *Introduction to Liquid Crystals: Chemistry and Physics*, Taylor & Francis, 1997

More advanced and detailed books:

* C. Kittel, *Introduction to Solid State Physics*, 7th edition, 1995
* J.F. Nye, *Physical Properties of Crystals: Their Representation by Tensors and Matrices*, Oxford, 2nd edition, 1985
* E. Hecht, *Optics*, Addison-Wesley, 4th edition, 2001

### Websites

* An award-winning website based at Case Western Reserve University in the USA
* A TLP covering many features of birefringence under polarised light
* A summary of phase transitions and the formation of domains in perovskites
* A comprehensive introduction to optical birefringence, with an interactive Java tutorial, part of the award-winning website based at Florida State University in the USA
Aims

On completion of this TLP, you should:

* Be familiar with the concept and mechanism of aqueous corrosion
* Know what factors affect the rate of aqueous corrosion
* Be familiar with the use of Tafel plots to predict aqueous corrosion rates

Before you start

This TLP is largely self-contained, though some sections require knowledge of the Pourbaix diagram. Details of the thermodynamics of aqueous corrosion and the Pourbaix diagram can be found in the TLP covering the thermodynamics of aqueous corrosion.

Introduction

We are reliant on metallic structures to support our everyday activities, be it getting to work, transporting goods around the world, or storing and preserving food. Metals are everywhere. However, from the moment most metals come into contact with water, they are subject to sustained and continuous attack, which can lead to the metal corroding and failing to do its job. It is therefore important to understand when corrosion will occur, how fast it will proceed, and what can be done to slow it down or stop it. Whether or not corrosion occurs at all is dependent on thermodynamics, and is covered in the TLP mentioned above. How to predict and control the rate of corrosion is covered in this TLP.

What's Going On? The Mechanism of Aqueous Corrosion =

Corrosion involves two separate processes or **half-reactions**: oxidation and reduction. Oxidation is the reaction that consumes metal atoms when they corrode, releasing electrons. These electrons are used up in the reduction reaction.

When a metal corrodes in solution, the two halves of the reaction can be separated by large distances. This is unlike oxidation in air, where one reaction occurs at the surface of the oxide film and the other at the surface of the metal, meaning that the reaction sites are always close together. In aqueous solution the separation can be very large: as long as there is both electronic and electrolytic contact between the anodic and cathodic sites, corrosion will occur regardless of the separation of the half-reactions.

When a Metal Corrodes - the Electrical Double Layer =

An electrical double layer is the name given to any region between two different phases when charge is separated across the interface between them. In aqueous corrosion, this is the region between a corroding metal and the bulk of the aqueous environment ("free solution"). In the double layer, the water molecules of the solution align themselves with the electric field generated by applying a potential to the metal. In the Helmholtz model, there is a layer of aligned molecules (or ions) one particle thick and then, immediately next to that, free solution. In later models (proposed by Louis Georges Gouy, David Leonard Chapman and Otto Stern) the layer is not well defined, and the orientation becomes gradually less pronounced further from the metal surface. However, for the purposes of determining the rate of corrosion, the Helmholtz model will suffice.

To corrode, an ion in the metallic lattice must pass through the double layer and enter free solution. The double layer presents a potential barrier to the passage of ions, and so has an acute effect on corrosion kinetics.

Like all chemical processes, the kinetics involved in corrosion obey the Arrhenius relationship:

\[k = k_0 \exp\left(\frac{-\Delta G}{RT}\right)\]

where k is the rate of reaction, k₀ is a fundamental rate constant and ΔG is the activation energy. R and T have their usual meanings of the ideal gas constant (8.3145 J K⁻¹ mol⁻¹) and temperature (in kelvin) respectively.
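The exponential form makes corrosion kinetics very sensitive to both the barrier height and the temperature. A minimal sketch (the 50 kJ mol⁻¹ activation energy and the unit prefactor are assumed values, purely for illustration):

```python
import math

R = 8.3145  # ideal gas constant, J/(K mol)

def arrhenius(k0, dG, T):
    """Arrhenius rate, k = k0 * exp(-dG / (R T))."""
    return k0 * math.exp(-dG / (R * T))

for T in (288.0, 298.0, 308.0):
    print("%.0f K: k = %.2e" % (T, arrhenius(1.0, 50e3, T)))
# for this barrier height, a 10 K rise roughly doubles the rate
```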
The chemical nature of corrosion suggests that it is driven by a change in Gibbs free energy, ΔG, but the electrical nature of corrosion leads to the conclusion that a voltage drives the reaction. Since both quantities can be considered as the driving force, they must be equivalent, and indeed they are related through the expression ΔG = −zFE, where z is the stoichiometric number of electrons in the reaction, F is Faraday's constant, 96485 C mol⁻¹, and E is the voltage driving the reaction. Note the minus sign, used to reconcile the conventions that a chemical reaction only proceeds if ΔG is negative but an electrical reaction only occurs if E is positive.

Since the absolute driving force of an applied voltage depends on what reaction is occurring, potentials are usually defined as the difference between the applied voltage and the equilibrium potential of the reaction. This difference is defined as the ***overpotential***, *η*:

η = E − Ee

It is worth noting that the "equilibrium potential" is not necessarily the standard electrode potential of the reaction, as this has the added requirement that all reagents are in standard states. The equilibrium referred to is, in fact, the "equilibrium electrode potential", Ee, which is specific to each electrode individually. If an electrode is at its equilibrium potential, the forward and backward reactions occur at the same rate, so no net reaction occurs. Net reactions only occur when the potential is moved away from equilibrium.

The Energy Landscape

Under equilibrium conditions, the energy landscape is symmetrical when free energy is plotted against distance from the metallic surface:

![Symmetrical energy landscape when free energy is plotted against distance from metallic surface](images/landscape.gif)

The fraction of the width of the double layer that must be crossed to reach the excited state is known as the symmetry factor, α.

However, when an overpotential is applied, the energy on the free-solution side of the plot is changed by an amount −zFη. The overpotential is distributed so that a fraction α lies across the barrier in the forward direction and (1 − α) lies across the barrier in the backward direction. The overall effect of the overpotential is to **lower** the activation energy for the forward reaction by αzFη. Thus the Arrhenius relation now becomes:

\[k' = k_0 \exp\left(\frac{-(\Delta G^0 - \alpha z F \eta)}{RT}\right)\]

\[k' = k \exp\left(\frac{\alpha z F \eta}{RT}\right)\]

where k′ is the new rate and k is the rate without the overpotential.

![Graph showing activation energy lowered by overpotential](images/landscape2.gif)
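To get a feel for how strongly an overpotential accelerates the forward reaction, here is a quick numerical check of the factor k′/k = exp(αzFη/RT). The values α = 0.5, z = 2 and T = 298 K are assumed, for illustration only:

```python
import math

R, F = 8.3145, 96485.0   # J/(K mol), C/mol

def rate_factor(eta, alpha=0.5, z=2, T=298.0):
    """Factor by which an overpotential eta (in volts) multiplies the
    forward rate: k'/k = exp(alpha z F eta / (R T))."""
    return math.exp(alpha * z * F * eta / (R * T))

for eta in (0.01, 0.05, 0.10):
    print("eta = %3.0f mV  ->  k'/k = %5.1f" % (eta * 1e3, rate_factor(eta)))
# a mere 100 mV of overpotential speeds the forward reaction up ~50-fold
```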
Kinetics of Corrosion - the Tafel Equation

Armed with the new Arrhenius expression and the generalised reaction M ⇌ Mz⁺ + ze⁻, where M is a metal that forms Mz⁺ ions in solution, we can now derive an equation describing corrosion kinetics. Consider the rate of the anodic (oxidation, corrosion) reaction, ka:

\[k_a = k_a' \exp\left(\frac{-\Delta G^0}{RT}\right)\]

Since the reaction involves the release of electrons, its progress can be expressed as a current density, i (current per unit area). The **exchange** current density, i₀, is defined as the current flowing in both directions per unit area when an electrode reaction is at equilibrium (and, hence, at its equilibrium potential). If i₀ is small, little current flows and the reactions at dynamic equilibrium are generally slow. Likewise, a high i₀ gives a fast reaction. The metal itself affects the value of i₀, even if the reaction does not involve the metal directly.

\[i_0 = zF k_a = zF k_a' \exp\left(\frac{-\Delta G^0}{RT}\right)\]

If an overpotential is applied, the activation energy is changed, as described on the previous page:

\[i_a = i_0 \exp\left(\frac{\alpha z F \eta}{RT}\right)\]

This is one form of the **Tafel equation**; it can also be written in several equivalent ways.

The quantity \(\frac{2.303\,RT}{\alpha z F}\) is given the symbol *b*a and is known as the anodic Tafel slope. It has units of volts per decade of current. Similarly, if the cathodic reaction were to be considered, the corresponding quantity would be \(\frac{-2.303\,RT}{(1 - \alpha)zF}\), since (1 − α) applies instead of α and E − Ee is negative. This quantity is the cathodic Tafel slope, bc.

The usual form of Tafel's equation is

η = *a* + *b*a log *i*a

where \(a = \frac{-2.303\,RT}{\alpha z F}\log i_0\).

Through consideration of the reaction as both a chemical and an electrical process, and some manipulation of the algebra, we have found that the applied potential is proportional to the log of the resulting corrosion current. This is certainly different from Ohmic behaviour, where the applied potential is directly proportional to the resulting current.

The Tafel Plot

Using the Tafel equation, useful plots can be drawn to help find corrosion rates. In a plot of i vs. E for a single electrode, the following is seen:

![Tafel plot of i vs. E](images/reactions2.gif)

Considering the sum of the currents, **then** ignoring the signs and **then** taking the log of the current gives a plot known as a Tafel plot, which is described in the animation below for a single electrode.

As can be seen, at Ee the net current flow is 0, as must be the case for equilibrium (the anodic and cathodic currents are equal and opposite). The straight-line sections have gradients related to the Tafel slopes - anodic, ba, and cathodic, bc. (If we had plotted E on the vertical axis and log i horizontally, the gradients would be equal to ba and bc.)

There are several important points to note:

* *i*a and *i*c never reach zero individually. However, the resultant net current flow will be zero if the anodic and cathodic currents are equal in magnitude.
* This derivation applies both to dissolution (corrosion) of metals and to deposition (electroplating) of metals.
* This derivation also applies to hydrogen evolution and oxygen reduction, even though they don't involve metal ions - they can still be **activation controlled**.
* The signs may be dropped, since they serve only to define the direction of current flow. Had the current been defined the opposite way around, all signs would be reversed, so dropping signs (to allow logs to be taken) is not unreasonable.
* α is usually 0.5 for a single-step reaction.
* Multiple-step reactions can have different steps with different stoichiometric numbers of electrons (different z). In this case, the value of z for the rate determining step should be used, **not the overall stoichiometric number**.
* However, the overall stoichiometric value of z **is** used to relate current density to rate constant, and in the Nernst equation. If these are different, it is standard to rename the value for the RDS (the value inside the exponential) as n.
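As a quick numerical check of the slope magnitudes, here is a minimal sketch evaluating both Tafel slopes, assuming α = 0.5, z = 1 and T = 298 K:

```python
import math

R, F = 8.3145, 96485.0   # J/(K mol), C/mol

def tafel_slopes(alpha=0.5, z=1, T=298.0):
    """Anodic and cathodic Tafel slopes, in volts per decade of current."""
    b_a = 2.303 * R * T / (alpha * z * F)
    b_c = -2.303 * R * T / ((1.0 - alpha) * z * F)
    return b_a, b_c

b_a, b_c = tafel_slopes()
print("b_a = %+.3f V/decade, b_c = %+.3f V/decade" % (b_a, b_c))
# about +/-0.118 V/decade, close to the 0.12 V/decade used in Question 6 below
```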
N.B. There is no universally adopted standard for plotting log (i) on the y-axis and E on the x-axis. In fact, it is more common to see polarisation curves plotted as E vs. log (i). In this TLP, graphs will be plotted as log (i) vs. E.

Tafel plots can be linked to Pourbaix diagrams. Corrosion occurs when two electrodes with different equilibrium potentials are in both electronic and electrolytic contact. We can use Tafel plots to predict corrosion rates, as explained in the animation below.

Diffusion Limited Corrosion =

So far, all reactions have been assumed to proceed (if they are thermodynamically possible) at the rate predicted by the Tafel analysis. In reality, reactions are often limited by other factors and don't achieve this maximum rate. One such factor is the availability of oxygen in solution.

In aqueous solutions that contain dissolved oxygen, an important cathodic reaction is the oxygen reduction reaction:

O2 + 4H+ + 4e → 2H2O

The reaction takes place at the surface of the metal, and so oxygen must be present at that site. If the reaction occurs quickly enough, the concentration of oxygen at the surface cannot be maintained at the same level as that in the bulk of the solution. In this case the rate of oxygen diffusion may become a limiting factor. With less oxygen available, the cathodic reaction slows down, and so must the anodic reaction, to conserve electrons (electrons can only be used up at the same rate as they are released, as charge must always be conserved).

Fick's first law can be used to find the maximum rate of oxygen diffusion. Since each oxygen molecule consumes 4 electrons, according to the reaction above, this maximum rate of diffusion corresponds to a maximum current density that the oxygen reduction reaction can sustain and, hence, a maximum corrosion rate for the anode (since electrons must be used at the cathode at the same rate as they are released at the anode). Since the corrosion current is limited, the cathodic arm of the Tafel plot is flattened.

Oxygen reduction is not the only process that deviates from the Tafel analysis. The hydrogen evolution reaction can be limited by the rate at which molecules desorb from the cathode surface. This is usually the rate-determining factor for hydrogen evolution on iron, copper, platinum and other metals. Relatively few metals behave as predicted by the Tafel analysis, examples being cadmium, mercury and lead.

Passivation =

Another effect that limits the rate of corrosion is **passivation**. If the potential of an electrode is raised above some **passivation potential**, a passive product may become favourable, forming a layer on the surface of the anode. In this case, the rate of corrosion can be much reduced. This is characterised by the value of log (i) peaking at a **critical current density** before falling to some lower value. In other words, the anodic arm of the Tafel plot reaches a peak and falls away to a roughly horizontal region:

![Graph of passivation](images/anatomy of polarisation curve8_8.gif)

It is possible to deliberately drive the reaction into the regime in which a passive layer forms. This technique is used in the process known as **anodising**, in which thick oxide layers are developed on aluminium components.

The Tafel plot below shows two electrodes. The cathodic branch of the electrode with the higher equilibrium potential (shown in blue) is diffusion limited. The anodic branch of the electrode with the lower equilibrium potential (shown in red) is passivated.
The resulting intersection is often a good representation of real corrosion scenarios, where the metal can passivate and there is a limited oxygen supply for the cathode.

Predicting Corrosion Rates

Armed with the Tafel equation and Tafel plots, it is now possible to predict whether a particular setup will result in corrosion and, if so, how fast the corrosion will be.

In order for corrosion to occur, there must be a suitable anodic reaction and an appropriate cathodic reaction. This is manifested as an intersection of a cathodic branch and an anodic branch on a Tafel plot. The point of intersection gives the corrosion potential and the corrosion current (or, more accurately, the log of the corrosion current density). The rate of corrosion is governed by all the factors discussed previously. When all the effects are taken into account, Tafel plots get quite complicated and some interesting effects occur.

Faraday's law allows the current density to be expressed as the mass of material lost per unit time. The calculation involves a few simple steps. For a corrosion reaction:

1. The current is converted into a rate of electron consumption using the electronic charge constant.
2. The number of electrons is divided by the stoichiometric number of electrons in the corrosion reaction, giving the number of metal atoms lost per unit time.
3. This answer is then divided by Avogadro's number to give the number of moles of metal atoms lost per unit time.
4. The number of moles is then converted to a mass lost per unit time, using the molar mass.
5. The mass is then converted to a volume using the density.
6. The volume is then converted to a thickness lost per unit time by dividing by the area that the current passes over. If a current density was given, this step has already been done.

Overall, the thickness of metal lost per unit time is given by the formula:

$$t = \frac{i\,m_M}{\rho\,e\,z\,N_A}$$

where t = thickness lost per unit time (m s⁻¹), i = current density (A m⁻²), mM = molar mass (kg mol⁻¹), ρ = density (kg m⁻³), e = electronic charge (C), z = stoichiometric number of electrons in the oxidation reaction, and NA is Avogadro's number.

It is also possible to have a situation where corrosion does not occur for thermodynamic reasons, for example if there is a driving force for the reverse of the corrosion reaction due to an applied potential. This results in deposition (electroplating) if there are metal ions in solution available to be reduced. If deposition is being carried out commercially, for example to electroplate silver onto stainless steel cutlery, the rate must be maximised to make production as cost effective as possible. However, care must be taken to avoid the hydrogen evolution reaction starting at the cathode in addition to the metal ion deposition.

![Tafel plots for hydrogen and copper](images/electroplate2.gif)

We can now draw Tafel plots and use them to determine corrosion current densities and corrosion rates. Below is an interactive graph that allows the corrosion rates of several metals to be investigated. Notice how, in this idealised situation, i.e. with no diffusion limits, the corrosion rate in aerated water can be extremely high. This shows how important a consideration the diffusion layer is.
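The whole chain, from Tafel-line intersection to metal loss, can be sketched in a few lines of code. The half-reaction parameters below are illustrative assumed values (loosely iron-like; they are not data from this TLP), with both branches taken as purely activation controlled:

```python
import math

# Illustrative half-reaction data (assumed values, not from the TLP):
# equilibrium potential (V), exchange current density (A/m^2), Tafel slope (V/decade)
Ee_a, i0_a, b_a = -0.44, 1e-2,  0.12   # anodic branch (metal dissolution)
Ee_c, i0_c, b_c =  0.80, 1e-3, -0.12   # cathodic branch (e.g. oxygen reduction)

# Intersection of E = Ee_a + b_a (log i - log i0_a) with the cathodic analogue:
log_i = (Ee_c - Ee_a + b_a * math.log10(i0_a) - b_c * math.log10(i0_c)) / (b_a - b_c)
i_corr = 10.0 ** log_i
E_corr = Ee_a + b_a * (log_i - math.log10(i0_a))
print("E_corr = %.2f V, i_corr = %.0f A/m^2" % (E_corr, i_corr))

# Faraday's law: thickness lost per unit time, t = i m_M / (rho e z N_A)
m_M, rho, z = 0.0558, 7870.0, 2        # iron: kg/mol, kg/m^3, electrons per atom
e, N_A = 1.602e-19, 6.022e23
rate = i_corr * m_M / (rho * e * z * N_A)            # m/s
print("thickness loss ~ %.0f mm/year" % (rate * 3.156e7 * 1e3))
# hundreds of mm/year: absurdly fast, because no diffusion limit is imposed
```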
Corrosion Control =

There are several ways in which kinetics can be employed to reduce or prevent corrosion.

Barriers and coatings - A barrier can be employed to prevent the electrolyte coming into contact with the metal. Tin is the usual barrier, as it does not react in most aqueous solutions. It is used in food cans, and has the useful property that if the barrier fails, the steel in the can corrodes. If zinc were used instead, it would begin to act as a sacrificial anode should the barrier fail. This would protect the steel, but the distinct disadvantage is that hydrogen is evolved at the steel cathode, and if it were to build up inside a closed container an explosive situation could arise. Other inert metals may be used as barriers, as can polymers, ceramics and paint.

Anodic protection - Anodic protection involves raising the potential of the metal in order to develop a passivating layer (such that protection is due to inhibited kinetics). Inhibitors added to the solution can achieve the same effect. Sodium carbonate, a base, acts to remove the acidity of the solution and drives the reaction towards the right of the Pourbaix diagram; above a certain pH, metals tend to form a passive layer, as a passive species is stable under these conditions. Potassium chromate works by providing a source of chromate ions that penetrate the surface of the metal, forming a stable, passive chromium oxide layer on the surface of the anode, and thus prevents corrosion by forming a passivating layer in a similar way to that seen in stainless steels.

![Tafel plots showing protection of metal by chromate](images/protect2a.gif)

Cathodic protection - Cathodic protection involves lowering the potential of the metal in order to make it more thermodynamically stable. This may be done using an impressed current (a supply of electrons from an external source) or with a sacrificial anode, as shown below:

![Tafel plots showing protection of metal by a sacrificial anode](images/protect2b.gif)

Summary =

Corrosion is a problem facing us every day and in almost every activity. Corrosion wastes material and energy, and can prevent objects from doing the job they were made to do, possibly with dangerous consequences.

The rate at which corrosion occurs depends on the kinetics of the reactions taking place, and so the electrical double layer is important. Applying an overpotential to an electrode drives the reaction in one direction, away from equilibrium. Tafel's law governs the new rate: as long as the reaction kinetics are activation controlled, the overpotential is proportional to the log of the corrosion current.

Other factors may limit the maximum rate of corrosion, with oxygen depletion limiting the speed of the cathodic reaction to the rate at which oxygen can be supplied from the bulk. The anodic reaction may be limited by passivation, if a sufficiently large overpotential is applied to form a passive layer. Passive layers separate the metal from the electrolyte and slow the corrosion reaction.

Faraday's law can give meaningful results from the predicted corrosion current, i.e. the mass loss per unit time.

Corrosion can be slowed by adding an inhibitor to remove hydrogen ions and move to a passivating region of the Pourbaix diagram, by adding an inhibitor that forms a passive layer on the anode, or by adding an inert barrier to the surface of the anode.

Questions =

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. Which of the following half-reactions represent a general corrosion process?

| | | |
| - | - | - |
| | a | M + 2e → M2+ and H2 → 2H+ + 2e |
| | b | M → M2+ + 2e and H2 → 2H+ + 2e |
| | c | M2+ + 2e → M and H2 → 2H+ + 2e |
| | d | M → M2+ + 2e and 2H+ + 2e → H2 |
2. Which of the following are possible cathodic reactions that accompany corrosion?

| | | |
| - | - | - |
| | a | 2H+ + 2e → H2 |
| | b | O2 + 4H+ + 4e → 2H2O |
| | c | 4OH- → O2 + 2H2O + 4e |
| | d | Mz+ + ze → M |

3. Which of the following is **not** a form of Tafel's equation?

| | | |
| - | - | - |
| | a | |
| | b | |
| | c | |
| | d | |

4. Using the electrochemical series, which of the following metals can be used as a sacrificial anode for steel under standard conditions?

| | | |
| - | - | - |
| | a | Nickel |
| | b | Zinc |
| | c | Magnesium |
| | d | Tin |

5. Look at the following Tafel plot. What is the critical current density?

![](images/question5.gif)

| | | |
| - | - | - |
| | a | 15 μA m⁻² |
| | b | 1.5 A m⁻² |
| | c | 32 A m⁻² |
| | d | 0.3 μA m⁻² |

### Deeper questions

*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

6. Draw the Tafel plot of the following information on graph paper and find:
a) The corrosion potential
b) The corrosion current density
c) How long a 3 mm thick component would survive in this scenario (use Faraday's law)

Both reactions have ba = −bc = 0.12 V/decade. One half-reaction has an equilibrium potential of −0.25 V (SHE) and an exchange current density of 10 μA m⁻²; this reaction has a passivation potential of 0.15 V (SHE) and a passive current density of 10 mA m⁻². The other half-reaction has an equilibrium potential of 0.8 V (SHE) and an exchange current density of 0.1 mA m⁻². The corroding metal forms 2+ ions, has a molar mass of 35 g mol⁻¹ and a density of 6400 kg m⁻³.

7. a) Write balanced half-reactions and the overall reaction for an iron water pipe corroding in fully aerated water under standard conditions, with the water flowing at such a rate as to maintain a diffusion layer 1 μm thick.
b) Derive the anodic and cathodic Tafel slopes for the two halves of the reaction if α = 0.5.
Using the fact that the iron does not passivate in the potential range being considered, the data given earlier, and the other information and answers above, draw the Tafel plot on graph paper, and then calculate:
c) The corrosion potential
d) The corrosion current density
e) Whether or not the pipe will survive 1 year if it has walls 5 mm thick
f) What happens if the pipe is part of a sealed system such as central heating?
g) Why else might your answer to c) be flawed?

Going further =

### Books

* J M West, *Basic oxidation and corrosion*, Ellis Horwood (1936)
* K R Trethewey and J Chamberlain, *Corrosion for students of science and engineering*, Longman (1988)

### Websites

* A really good website with definitions and explanations
* Another website that covers most things in this TLP; a bit more detailed, but needs more prior knowledge
Aims

On completion of this TLP you should:

* be able to identify the materials used in common objects using suitable techniques;
* be able to identify the processes used to produce a particular shape or microstructure within the components of an article;
* be able to understand why materials and processes might be chosen to produce a given article.

Before you start

This TLP brings together considerations from a range of other topics, and so many other TLPs might be helpful in understanding the properties and techniques referred to. There will be links to the relevant TLPs at appropriate points.

Introduction

Most manufactured items contain a large number of individual components, often using a surprisingly wide variety of materials. Examining a manufactured item (article) can help in understanding what materials are commonly used in manufacturing, and why. It is important to keep in mind economic factors as well as the properties of materials and the specific purpose of each component. This means it can be quite a complex procedure to identify, with confidence, the materials within an article. It is usually done by considering a range of all appropriate factors and making use of some of a broad range of identification techniques.

This TLP suggests some factors that may be considered in attempting to identify the materials used in various articles. It will also give some examples of where these factors may be important. The other DoITPoMS TLPs provide a wide variety of information about the functional and mechanical behaviour of materials, as well as a range of information on techniques for examining materials. This TLP will not aim to describe all of these, but will link to them.

Dismantling the Article =

The first step in determining the materials and the reasons for their use in an article is to take it apart. This is an important stage because it is a chance to see where components fit and what their purposes might be.

An exploded diagram of the components and where they fit in relation to the rest of the article is often extremely helpful. Once the article has been dismantled it may not be possible to reassemble it, and knowing the position of a component in the article is vital to understanding what its function might be. The exploded diagram helps to sort out what the purposes of the components are, and therefore what might be required of their properties. The diagram should be clear and well labelled, including scale bars.

Important properties to consider for each component could include:

* **Mechanical properties**: such as strength or toughness
* **Electrical properties**: conducting or insulating
* **Aesthetic properties**: is the component on display, and would the appearance be important?
* **Corrosion resistance**: the component will probably need to last for at least the lifetime of the product
* **Density of component**: it may be important that the article is lightweight
* **Specific functions**: some materials are chosen because they fulfil a specific function within the article (for example, materials with particular magnetic or electrical behaviour)

It is also important to consider economic factors, such as the costs of the raw materials and of processing. The cost of a material can influence choice as much as its physical properties; for example, diamond is an extremely hard material, but it is not widely used due to cost.
Each component will probably be reasonably easily identifiable as belonging to one of four classes of materials:

* Metals
* Polymers
* Ceramics
* Composites

Classes of Materials

It can often be quite straightforward to tell materials classes apart by look or feel.  Metals are usually more reflective or 'metallic' looking, ceramics are commonly matte, and polymers may be shiny or matte, but are typically less dense than either metals or ceramics.  Composites may be harder to immediately identify, but the surface may appear non-uniform and/or sectioning the sample may reveal fibres or particles.

It is useful to note that taking a cross-section can often be helpful in identifying the materials used in components, as the internal material and/or microstructure may differ from that at the edge.

Some materials may not be quite so easily identified simply from appearance and texture, but considering the factors mentioned can help to narrow down the possibilities.  The table below gives a rough guideline to the kinds of properties you would expect from each class, and a few examples:

| **Class** | **Common Properties** | **Examples** |
| - | - | - |
| Metal | Hard, ductile and conduct heat and electricity | Copper (wires), stainless steel (cutlery) |
| Polymer | Widely variable, often soft and flexible | Polystyrene (cups), polycarbonate (CDs), polyethylene (plastic bags) |
| Ceramic | Hard, brittle, resistant to corrosion, electrically non-conductive | Concrete (buildings), PZT (piezoelectric used in lighters and ultrasonic transducers), porcelain (vases, teacups) |

The tree-diagram below shows an overview of a variety of materials that might be encountered:

![Materials tree diagram](images/tree.gif)

Coatings

Many components may have some kind of coating: a covering of another material designed to improve the surface qualities of the item.  The improvement could be for many reasons, including corrosion resistance, appearance, adhesion, wear resistance and scratch resistance.

Different kinds of coating will have different processing methods.  It is often possible to deduce the method from the composition (of the coating and the bulk of the component) and the shape of the component.  **Common coating methods include:**

***Hot dip coating*** – a method used for coating metals (commonly ferrous alloys) with a low melting point alloy.  The component is dipped in a bath of the molten coating alloy. For example, zinc is often hot dipped onto steel (called ‘galvanising’).  This also offers sacrificial corrosion protection and gives a distinctive ‘spangled’ appearance (which can be prevented by including particles in the melt to encourage nucleation).

![Image of surface of steel coated with zinc](images/galvanised2.jpg)

Steel coated with zinc

***Electroplating*** – reduction of cations in an electrolytic solution onto conducting components. For example, silver-plated cutlery.

***Anodising*** – commonly used for aluminium components; an electrochemical cell is set up which drives the oxidation of the metal, increasing the thickness of the protective oxide layer.

***Vacuum deposition*** – also known as PVD (physical vapour deposition).  For example, ‘evaporation’ involves the heating of the coating metal in a vacuum, so that it evaporates and is deposited onto the surface of the component that is positioned above.  This process is used to make mirrors, depositing a thin layer of metal, usually aluminium.
***Thermal Spraying*** - powder particles are fed into a high temperature torch (combustion or plasma), where they melt and are accelerated against the substrate. It is mainly used to produce relatively thick ceramic and metallic layers.

***Enamelling*** - a powder is distributed on a surface, which is then heated so that the powder melts and bonds to the substrate. The resultant layer is usually glassy. Originally developed in ancient Egypt, and extensively used for jewellery, it is also widely employed for cooking utensils and various domestic items, especially those subjected to high temperature.

One example of a coated metal is shown below: a drawing pin, which appeared to be brass, was found to be magnetic, and so the surface was abraded, revealing a grey metal within – steel.  Steel, which has very good mechanical properties, is covered with brass for aesthetic reasons; the coating also protects the surface from corrosion.

| | |
| - | - |
| Magnetic drawing pin | Abraded pin revealing steel |

Metals

Metals are extremely widely used in manufacturing, often for their mechanical or electrical properties. They are often easily shaped and have good mechanical properties, so they may be used in ‘structural’ elements of an article; they also have good conduction (electrical and thermal), and so may also be used, for example, in electrical wiring.

Familiar metals can often be reasonably well identified by eye (see examples below), but there are more complex methods of metal identification available too.

![Image of copper and brass](images/metalseg.jpg)

Techniques for the Identification of Metals
-

One very easy test for a metal is to see if the component is magnetic; this narrows down the possible materials to those that are ferromagnetic (most commonly iron or nickel).

Simple corrosion tests, involving immersing a scratched sample (to remove any coating) in water (or some other electrolyte), can be helpful. Leaving a sample in water overnight might reveal rusting. For example, a scratched zinc-coated steel sample would not rust, due to the zinc offering sacrificial protection. However, a scratched tin-coated steel sample would rust, because the tin acts only as a barrier between the steel and air. This is especially useful for ferrous alloys, as corrosion resistance is very often a concern for these (and they are very common).

**Optical Microscopy**

This involves looking at mounted, polished and etched samples under a light microscope. It reveals the microstructure of the sample; this can give information on both the composition and processing of the component. See the relevant TLP for more information on how to go about this.

| | |
| - | - |
| Image showing Al-Cu eutectic composition | Image of cold rolled zinc showing deformation twins |
| Al-Cu eutectic composition: this is an Al-Cu alloy showing a very clear eutectic lamellar microstructure. | Cold rolled zinc showing deformation twins: this is zinc; it has been cold rolled, as can be seen from the lenticular deformation twins. |

See the micrograph library for further examples.

The benefits of this method are that optical micrographs can reveal a large amount of information about a metallographic sample, and it is possible to find known examples (see above links) to compare your work to. After an initial examination by eye and consideration of properties, optical microscopy is an important step in the characterisation of metals. It can reveal many things that the initial examination does not.
Scanning Electron Microscopy (SEM)

Scanning Electron Microscopy uses a focussed beam of high-energy electrons to form images of samples. Electron microscopy is not limited by the wavelength of light, so very closely spaced features can be resolved; when set up correctly, this method gives very clear, high-magnification images, revealing more than optical microscopy could. It also gives a large depth of field, so rough surfaces can still be in focus. One limitation is that the sample must be electrically conducting; the mounting polymer and the sample must both conduct electricity.

**Energy Dispersive X-ray Spectroscopy (EDS)**

This is a technique often used in conjunction with the SEM, with an electron beam of ~20 keV. The beam strikes the sample, resulting in X-rays being emitted; the X-rays are collected and the intensities and energies examined. The results can determine the atomic composition of the sample at the point of beam-sample interaction.

Examples: It can reveal the composition of a very thin coating, for example by taking a linescan. The diagram below shows results for a linescan taken across a coating, showing a layer of copper and a finer layer of nickel on an iron alloy (all other elements were ignored). This may not be easy to tell from optical microscopy alone.

![Linescan across a nickel/copper coating on iron alloy](images/linescan.gif)

EDS can be a very helpful method of characterisation, but it is not always absolutely reliable. Elements of low atomic number (less than about 11, i.e. below sodium) are difficult to detect by EDS. There is often contamination by elements like carbon from the environment. It is important to use common sense when interpreting EDS results; most of the time it is unlikely that very heavy elements like uranium are actually present. Results may be very good qualitatively, but care must be taken when trying to obtain and interpret quantitative results. The quantitative analysis results depend upon things like the set-up of the SEM and the geometry of the sample.

Examples of Fabrication Processes
-

The ease with which metals are shaped leads to a wide range of processing techniques for different end products; any coatings will also be applied by one of a range of processes. It may be possible to deduce the method of processing from the shape and the properties of the metal. The microstructure may also give further clues:

| **Process** | **Description** | **Features** |
| - | - | - |
| Deformation | Includes a variety of techniques, including forging, extruding and drawing; see the relevant TLP for more information | May expect to see directionality or squashed grains in places that have been stressed (see the animation) |
| Machining | Includes, for example, laser cutting and water-jet cutting, as well as more conventional methods like sawing or grinding. | These methods would not result in larger scale microstructural directionality, but may show localised deformation. Can give a very good finish |
| Casting | The molten metal is set in a cast of some kind; see the relevant TLP for further details on the different kinds of casting. | There is a lot of variety of microstructure in cast products, which ranges from single crystal components to those with clear chill, columnar and equiaxed zones. |
| Carburisation – a surface heat treatment | A surface treatment in which carbon is diffused into the surface of a steel object above the ferrite-austenite transition temperature (~ 740 °C). This is done by heating the steel in a C-rich atmosphere (e.g. in CO gas). The result is a hard, high carbon surface several hundreds of microns thick, surrounding a tough, low carbon interior. To improve the hardness, the surface may be quenched, which promotes the formation of martensite. | See micrograph below |

![Micrograph number 271 from micrograph library](images/271s.jpg)

Micrograph #271: an example of carburisation; see the micrograph library for further information.

Polymers

Polymers are very widely used in many areas today. They have a range of properties that can often be controlled by additives, blending or copolymerisation. Many structures and chemical compositions are seen in polymers, but they can be separated into three main groups:

**Thermoplastics:** These are the most widely used polymers, due to the ease of processing (especially by injection moulding). Thermoplastics can, once they have been set (solidified) for the first time, be re-melted and remoulded (unlike thermosets). Some examples of thermoplastics are: polyethylene, polystyrene and PET.

**Thermosets:** These differ from thermoplastics in that they do not re-melt after they have been set (or cured). This is due to the long polymer chains forming cross-links on curing. One example is melamine formaldehyde, which is used in domestic electrical plugs.

**Elastomers:** These polymers have a glass transition temperature below room temperature (see the relevant TLP). Rubbers are examples of commonly used elastomers.

Techniques for identifying polymers
-

### **Polymer tests**

The polymer tests are a simple way to identify polymers, or at least to narrow down the possibilities. Some of the steps rely on recognising smells, which can be difficult, and it is important to remember that some tests (for example transparency) can be unhelpful due to additives like dyes. The identification chart goes through a series of simple tests, which should be carried out on a small sample of the polymer. Below is an interactive version of the identification chart. It is important to connect the results of the tests to the function and cost of the item.

### Infra Red (IR)

In an IR spectrometer, IR radiation excites covalent bonds, causing them to vibrate at their resonant frequency. This frequency depends on the exact nature of the bonds (e.g. single/double, and the atomic masses of the elements involved). The output is a graph of intensities at different wavelengths (and therefore energies) of infrared radiation. This plot shows the transmitted intensities, so at resonant frequencies, where energy is absorbed, there is a dip (an absorption band). This allows the bonds, and therefore the sample, to be identified. Here is a collection of IR spectra for some common polymers:

IR spectrometry is often a very quick method of polymer identification (depending on the equipment available). Preparing a sample for IR spectroscopy may be very simple. A small sample of the polymer, with any coatings removed, should be placed in the IR machine and analysed. If it is likely that the plastic contains plasticizers and colours, placing the polymer in ether for an hour and then fully drying it may remove them prior to carrying out IR spectroscopy on the sample. (Test this with a small piece of your sample polymer first though, as some polymers are soluble in ether.)

### Differential Scanning Calorimetry (DSC)

DSC measures specific heat capacity and how it varies with temperature.
As a polymer is heated through its glass transition point, it experiences a sudden change in heat capacity, as chain rotation allows it to take up more energy. This means that DSC allows us to identify the glass transition temperature of a polymer. It can also aid the interpretation of the type of copolymer present (e.g. block copolymer, random copolymer, graft copolymer). See the relevant TLP for an explanation of this technique.

Examples of processes
-

Polymers are usually processed by moulding methods:

| Process | Description | Features |
| - | - | - |
| Injection moulding | Polymer granules are melted and forced into a mould. This is extremely widely used to mass produce small, precise polymer components. | It gives a good finish, and the injection points where excess material has been cut off are often visible. In transparent polymers a residual stress field may be visible under crossed polars (see the relevant TLP). |
| Blow moulding | Cylinders of polymer are inserted into a die and hot air is forced in, pushing the polymer out to the walls of the die. | Gives hollow components, such as bottles or containers. It is only used for thermoplastics. |

### Additives and Blends:

Polymers very often have some form of additive, even if it is simply to add colour. These may or may not impair the ability to identify the polymer. When identifying any material it is important to think about the cost and properties, but blending or additives can change the properties of a polymer. One very common example of a polymer commonly found both with and without additives is polyvinylchloride (PVC). This polymer is used in its rigid, un-plasticized form in plastic guttering and water and gas piping, but is also often found with added plasticizers in a variety of applications, from clothing to coating electrical wires.

Ceramics and Composites
=

Ceramics

Ceramics cover a very wide range of materials, from structural materials like concrete to technical ceramics like PZT – a piezoelectric ceramic.  Usually they are defined as solids that are a mixture of metallic or semi-metallic and non-metallic elements (often, although not always, oxygen), and that are quite hard, non-conducting and corrosion-resistant.

**Techniques for identifying ceramics**

It is effectively impossible to identify ceramics by eye. Optical microscopy allows the examination of the microstructure to identify the method of processing; however, it does not allow the identification of different phases.

The most useful technique for finding the composition of a ceramic is energy dispersive X-ray spectroscopy (EDS).  Note that for non-conducting ceramics the surface of the sample must be covered with a metallic coating (often gold) to prevent charge build-up.

Here is an example EDS analysis for PZT – a piezoelectric ceramic, Pb[Zr*x*Ti1-*x*]O3. This data gives the formula to be Pb0.7[Zr0.49Ti0.44]O3.  For the piezoelectric ceramic we would expect to have *x* ~ 0.52.

**EDS data for PZT**

| **Element** | **Weight%** | **Atomic%** |
| - | - | - |
| O K | 17.26 | 63.49 |
| Ti K | 7.58 | 9.32 |
| Zr L | 16.15 | 10.42 |
| Pb M | 59.01 | 16.77 |
| Totals | 100.00 | 100.00 |

Another appropriate method is X-ray diffraction. This allows you to detect the phase or phases present, as well as measuring lattice parameter(s) in order to specify precise compositions.
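The Weight% to Atomic% conversion in the EDS table above is simply a normalised division by molar mass, and is easy to check. Below is a minimal sketch of the arithmetic; the molar masses are standard values, and the dictionary keys are illustrative:

```python
# Convert the EDS weight% values from the PZT table above into atomic%.
molar_mass = {"O": 16.00, "Ti": 47.87, "Zr": 91.22, "Pb": 207.2}   # g mol^-1
weight_pct = {"O": 17.26, "Ti": 7.58, "Zr": 16.15, "Pb": 59.01}

# Moles of each element per 100 g of sample, then normalise to 100 %.
moles = {el: w / molar_mass[el] for el, w in weight_pct.items()}
total = sum(moles.values())

for el, n in moles.items():
    print(f"{el}: {100 * n / total:5.2f} at%")   # compare with the Atomic% column
```

Running this reproduces the Atomic% column of the table to within rounding.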
**Processing techniques for ceramics**

Ceramics are mostly made by powder processing techniques, for example sintering. It may be possible to identify the kind of processing from directionality or porosity in the sample.

Composites

Composites are often used in applications that require specific ‘conflicting’ properties, such as a high strength and a high toughness. The properties may be conflicting because having a high yield stress sometimes relies on trapping and tangling dislocations, but these reduce the ductility and toughness of the material.  Composites often consist of a matrix and fibres or particles that affect the properties (see the relevant TLP).

Usually for composites, once they have been identified as such, it is better to treat each part of the composite as a separate material, and then look at the costs of manufacture and processing. One important distinction to make is the structure of the two parts that make up the composite – i.e. is it a matrix with long, aligned fibres? Or a matrix with particles? etc.

Example Article
=

The best way to understand the concepts in this TLP is to try analysing something. Here is a 'virtual' article, which can be clicked through:

Summary
=

Many articles can be analysed using the relatively simple techniques described here.  This can help with determining the types of material used for different components, their composition, and also their processing history.  The examination of an article can start to put into use the methods and theory of materials science.  Looking at the mechanical, thermal and aesthetic properties of materials can help materials scientists design similar items.

The range of techniques available today is very large, but often a reasonable amount of understanding can be gained from fairly simple techniques and some common sense.  It is essential not to forget the importance of stepping back from results and considering whether or not they are logical: do they fulfil the requirements in terms of mechanical, thermal, aesthetic and economic properties?

Questions
=

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. Which of these methods of characterisation would be helpful in identifying a ceramic?

| | | |
| - | - | - |
| | a | Infra red spectroscopy |
| | b | Differential Scanning Calorimetry |
| | c | Energy Dispersive X-ray Spectroscopy |
| | d | Scanning Electron Microscopy |

### Deeper questions

*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

2. Match the material to the properties, and try to think of reasons why the material may have been chosen.

Material 1 has a shiny 'spangled' appearance due to large dendritic grains on the surface. It is a structural component within the article that must withstand relatively high stresses and may experience wear in service; it must not corrode in warm air, and it is in a low-cost article.

| | | |
| - | - | - |
| | a | Tinned Steel |
| | b | Stainless Steel |
| | c | Aluminium |
| | d | Galvanised Steel |

3. Match the material to the properties, and try to think of reasons why the material may have been chosen.

Material 2 is a brightly coloured, low-density component. It must be tough, rigid and non-toxic; the recycling mark is number 7.

| | | |
| - | - | - |
| | a | Polyethylene (PE) |
| | b | Acrylonitrile-butadiene-styrene (ABS) |
| | c | Polyethylene terephthalate (PET) |
| | d | Polytetrafluoroethylene (PTFE) |

4. Match the material to the properties, and try to think of reasons why the material may have been chosen.
Material 3 is an electrical component. It forms a contact that is subjected to a reasonable amount of wear, and it must not corrode in air.

| | | |
| - | - | - |
| | a | Low carbon steel |
| | b | Copper |
| | c | Brass: Copper/Zinc alloy |
| | d | Stainless steel |

Going further
=

### Websites

* Information on selection, design and processing
* Some simple ways to identify metals
* Some simple ways to identify polymers
* This TLP covers the fundamentals of metal forming processes
* Macrogalleria's explanation of using DSC for studying thermal transitions of polymers
* "Probably" the Web's largest plastics encyclopedia, including plastics processes
* Metallography advice, including sample preparation methods and choice of standard etchants
Aims

On completion of this TLP you should:

* know the differences between single crystal, polycrystalline and amorphous solids
* be able to identify the characteristic features of single crystals and polycrystals
* understand the nature of crystal defects
* appreciate the use of polarised light to examine optical properties

Introduction

The fundamental difference between single crystal, polycrystalline and amorphous solids is the length scale over which the atoms are related to one another by translational symmetry ('periodicity' or 'long-range order'). Single crystals have infinite periodicity, polycrystals have local periodicity, and amorphous solids (and liquids) have no long-range order.

* An *ideal single crystal* has an atomic structure that repeats periodically across its whole volume. Even at infinite length scales, each atom is related to every other equivalent atom in the structure by translational symmetry.
* A *polycrystalline solid* or *polycrystal* is comprised of many individual *grains* or *crystallites*. Each grain can be thought of as a single crystal, within which the atomic structure has long-range order. In an *isotropic* polycrystalline solid, there is *no orientational relationship* between neighbouring grains. Therefore, on a large enough length scale, there is no periodicity across a polycrystalline sample.
* *Amorphous* materials, like window glass, have no long-range order at all, so they have no translational symmetry. The structure of an amorphous solid (and indeed a liquid) is not truly random - the distances between atoms in the structure are well defined and similar to those in the crystal. This is why liquids and crystals have similar densities - both have *short-range order* that fixes the distances between atoms, but only crystals have long-range order.

![Diagram showing the range of translational periodicity in materials](images/length_scale.gif)

The range of crystalline order distinguishes single crystals, polycrystals and amorphous solids. The figure shows how the *periodicity* of the atomic structure of each type of material compares. Many characteristic properties of materials, such as mechanical, optical, magnetic and electronic behaviour, can be attributed to the difference in structure between these three classes of solid.

Single crystals: Shape and anisotropy
=

A single crystal often has distinctive plane faces and some symmetry. The actual shape of the crystal will be determined by the availability of crystallising material, and by interference with other crystals, but the angles between the faces will be characteristic of the material and will define an ideal shape. Single crystals showing these characteristic shapes can be grown from salt solutions such as alum and copper sulphate.

Gemstones are often single crystals. They tend to be cut artificially to obtain aesthetically pleasing refractive and reflective properties. This generally requires cutting along crystallographic planes. This is known as cleaving the crystal. A familiar example is diamond, from which decorative stones can be cleaved in different ways to produce a wide range of effects.

To see a variety of symmetrical naturally formed minerals, visit the website.
Consider the following three-dimensional shapes:

| | |
| - | - |
| Diagram of a cube | Cube: 6 identical squares |
| Diagram of a tetrahedron | Tetrahedron: 4 identical equilateral triangles |
| Diagram of an octahedron | Octahedron: 8 identical equilateral triangles |
| Diagram of a rhombohedron | Rhombohedron: 6 identical parallelograms with sides of equal length |

You can make your own cube, octahedron and tetrahedron by printing the following pages and following the instructions on them. These three shapes are the most important in materials science, and you should be very familiar with them!

The symmetry exhibited by real single crystals is determined by the crystal structure of the material. Many have shapes composed of less regular polyhedra, such as prisms and pyramids.

| | |
| - | - |
| Diagram of a hexagonal prism | Hexagonal prism: 2 hexagons and 6 rectangles |
| Diagram of a square-based pyramid | Square-based pyramid: 4 triangles and a square |

Not all single crystal specimens exhibit distinctive polyhedral shapes. Metals, for example, often have crystals of no particular shape at all.

These quartz specimens show a range of shapes typically exhibited by crystals.

Most single crystals show anisotropy in certain properties, such as optical and mechanical properties. An amorphous substance, such as window glass, tends to be isotropic. This difference may make it possible to distinguish between a glass and a crystal. The characteristic shape of some single crystals is a clue that the properties of the material might be directionally dependent. The properties of polycrystalline samples can be completely isotropic or strongly anisotropic, depending on the nature of the material and the way in which it was formed.

Single crystals: Mechanical properties

Gypsum can be cleaved along particular crystallographic planes using a razor blade. The bonding perpendicular to these cleavage planes is weaker than that in other directions, and hence the crystal breaks preferentially along these planes. Quartz and diamond do not have such distinct cleavage planes, and so cleaving these crystals requires much more effort and care.

There are distinct planes in the gypsum structure, with no bonding between them. These are the cleavage planes. It is much more difficult to cleave gypsum along planes other than these. In contrast, all of the planes in the quartz structure are interconnected, and the material is much more difficult to cleave in any direction. This is a demonstration of a way in which the crystal structure of a material can influence its mechanical properties.

Certain crystals, such as gypsum, can be cleaved with a razor blade along particular crystallographically-determined planes.

Glass is impossible to cleave. As an amorphous substance, glass has no crystallographic planes and therefore can have no easy-cleavage directions. Glassy materials are often found to be mechanically harder than their crystalline equivalents. This is an example of how the mechanical properties of crystals and amorphous substances differ.

Single crystals: Optical properties
=

Quartz crystals are birefringent, so they exhibit optical anisotropy. Consider plane polarised light passing through a birefringent crystal. Inside the crystal, the light is split into two rays travelling along *permitted vibration directions* (p.v.d.s).
The two rays are subject to different refractive indices, so the light travelling along each p.v.d. reaches the opposite side of the crystal at a different time. When the two rays recombine, there is a phase difference between them that causes the *polarisation state* of the transmitted light to be different from that of the incident light.

Optical anisotropy in thin samples can be observed by placing the sample between crossed polarising filters in a light box. The bottom filter, between the light source and the sample, is called the polariser. The top filter, between the sample and the observer, is called the analyser. The polariser and analyser have polarising directions perpendicular to one another.

![Photo of a light box](images/light_box.jpg)

The apparatus used for examining optical anisotropy consists of a white-light source, two polarising filters and a frame to hold them apart, creating a working space.

When no sample is in place, the light that reaches the analyser is polarised at 90° to the analyser's polarisation direction, so no light is transmitted to the observer. When a quartz sample (with favourable orientation, see later) is placed between the filters, the crystal changes the polarisation state of the light that is transmitted through it. When this light reaches the analyser, some component of it lies parallel to the polarisation direction of the analyser, and therefore some light is transmitted to the observer.

If a quartz slice shows optical anisotropy, the intensity of light transmitted through the analyser varies as a function of the angle of rotation of the quartz sample in the plane of the filters. At certain orientations, no light is transmitted. These 'extinction positions' are found at 90° intervals.

Video animation of anisotropic quartz rotated between crossed polarisers

When the same experiment is done using a piece of glass, it is found that light is not transmitted for *any* orientation. This is because the glass is *optically isotropic*, and does not change the polarisation state of the light passing through it.

In quartz, there is one direction of propagation for which no birefringence is observed. If a sample is cut so that the incident light is parallel to this direction, the sample behaves as if it is optically isotropic and no light is transmitted. The crystallographic direction that exhibits this property is known as the *optic axis*.

![Photo of quartz cut to let through no light](images/q_2_s.jpg)

When the quartz sample is cut so that the incident light is parallel to the optic axis, no light is transmitted in any orientation.

This experiment demonstrates that some single crystals, such as quartz, show anisotropic optical properties. The phenomenon depends on the crystallographic orientation of the crystal with respect to the incident light. Amorphous materials like glass have no 'distinct' crystal directions, so anisotropic properties are generally not observed.
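For monochromatic light, the intensity variation described in the crossed-polars experiment above follows the standard expression for a birefringent plate between crossed polars, I/I0 = sin²(2θ) sin²(δ/2), with phase difference δ = 2πΔn·t/λ. A minimal sketch is given below; the birefringence, thickness and wavelength are illustrative values (roughly appropriate for a thin quartz slice), not data from this TLP:

```python
import math

def crossed_polars_intensity(theta_deg, delta_n, t, wavelength):
    """Relative transmitted intensity I/I0 = sin^2(2*theta) * sin^2(delta/2),
    where theta is the angle between the polariser and a permitted vibration
    direction, and delta = 2*pi*delta_n*t/wavelength is the phase difference."""
    delta = 2 * math.pi * delta_n * t / wavelength
    theta = math.radians(theta_deg)
    return math.sin(2 * theta) ** 2 * math.sin(delta / 2) ** 2

# Illustrative values: quartz birefringence ~0.009, 30 um slice, green light.
for angle in range(0, 181, 15):
    I = crossed_polars_intensity(angle, 0.009, 30e-6, 550e-9)
    print(f"{angle:3d} deg   I/I0 = {I:.2f}")
```

Running this shows extinction (I/I0 = 0) at 0°, 90° and 180°, with maxima midway between, reproducing the 90° intervals seen in the experiment.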
Polycrystals

Single crystals form only in special conditions. The normal solid form of an element or compound is *polycrystalline*. As the name suggests, a polycrystalline solid or *polycrystal* is made up of many crystals. The properties of a polycrystal are notably different from those of a single crystal. The individual component crystallites are often referred to as grains, and the junctions between these grains are known as grain boundaries. The size of a grain varies according to the conditions under which it formed.

Galvanised steel has a zinc coating with visibly large grains. Other materials have much finer grains, and require the use of optical microscopy.

![Photograph of galvanised steel](images/galvanised_steel.jpg)

In galvanised steel, the grains are big enough to be seen unaided. The plate measures 5 cm across. In many other metals, such as this hypoeutectoid iron-carbon alloy, the grains may only be seen under a microscope.

These photographs show a polycrystalline sample of quartz mixed with feldspar, in which the grains all have optically anisotropic properties. Between the crossed polarisers, each grain allows transmission of light at a slightly different point in the rotation. This gives the strange effect seen here. This polycrystal contains randomly oriented grains that allow transmission at different angles; consequently, different regions of the polycrystal are seen in these two photographs.

The three-dimensional shape of grains in a polycrystal is similar to the shape of individual soap bubbles made by blowing air into a soap solution contained in a transparent box. The surface between bubbles is a high-energy feature. If the area of the surface is decreased, the overall energy of the system decreases, so *reduction of surface area is a spontaneous process*. If all the bubbles were the same size, the resulting structure would be a regular *close-packed* array, with 120° angles between the surfaces of neighbouring bubbles.

In practice, *bubble growth* can occur because the surface area of a few large bubbles is lower than that of many small bubbles. Large bubbles tend to grow, and small bubbles tend to shrink. The bubbles are therefore different sizes, so there are large deviations from the close-packed structure. On average, however, three bubbles meet at a junction, and the angle between the bubble surfaces is usually within a few tens of degrees of 120°.

The *curvature* of the surfaces is also important. Surfaces with a smaller radius of curvature have a higher energy than those with a larger radius of curvature. As a result, some small bubbles cannot shrink and disappear, even though the surface area would decrease if they did so. This is because the curvature of the boundaries, and the associated energy, would be too high.

In a real polycrystal, the grain boundaries are high-energy features, similar to the surfaces between bubbles. The soap froth is a very good model for the grain structure of a simple polycrystalline material, and many similar features can be observed in the two systems. The soap bubbles are analogous to the grains, and the surfaces of the bubbles are analogous to the grain boundaries. Compare the photographs of the soap bubbles with the micrograph of a polycrystalline material that has been etched to reveal the grain boundaries.

The packing of soap bubbles is somewhat similar to the packing of crystals - both systems seek to minimise their surface area. Note the angles at the junctions of grain boundaries. The grains of this hypereutectoid iron-carbon alloy are packed in a similar way to the bubbles in the previous photographs.

Grain boundaries in a polycrystalline solid can *relax* (move in such a way as to decrease the total energy of the system) when atomic rearrangement by diffusion is possible.
In real materials, many other effects can influence the observed grain structure.

Defects
=

Within a single crystal or grain, the crystal structure is not perfect. The structure contains defects such as vacancies, where an atom is missing altogether, and dislocations, where the perfection of the structure is disrupted along a line. Grain boundaries in polycrystals can be considered as two-dimensional defects in the perfect crystal lattice. Crystal defects are important in determining many material properties, such as the rate of atomic diffusion and mechanical strength.

We can use a "shot model" to get a picture of crystal defects. The model consists of many small ball bearings trapped in a single layer between two transparent plates. They tend to behave like the atoms in a crystal, and can show the same kinds of defects.

When the shot model is held horizontally, so that the balls flow freely, the resulting structure is similar to a liquid.

Shot model held horizontally. The balls form a liquid-like structure.

As the model is tilted towards the vertical, the balls pack closely together. This represents crystallisation. One or two balls may be suspended above the main body by electrostatic forces: this is comparable to the vapour found above the crystal. In some places the balls form close-packed regions. Tapping of the model causes minor rearrangements of the balls, especially at the top of the "solid" region. This is similar to diffusion, in which case the tapping is analogous to thermal activation. Occasionally, the "diffusion" process may cause two grains to join together, or some grains to "grow". The following image sequence shows the behaviour of the shot model as it is rearranged by tapping, starting from a polycrystalline state with many small grains and ending with much larger grains. Note the presence of vacancies in the structure.

Grain growth in the shot model

With great care, it may be possible to create a single crystal, as *all* the balls form a single pattern. Note that diffusion occurs mainly near the top of the balls: those towards the middle and bottom do not easily move, as the photographs show. Even in a single crystal, or large-grained sample, there are still vacancies, as the shot model shows. The reason for this involves *entropy*: at all finite temperatures, there will be some disorder in the crystal.

The balls within a grain arrange themselves into close-packed planes. In metals, close-packing of atoms is a very common structure. This pattern is typical of *hexagonal close-packed* and *cubic close-packed* lattices. Note that in this 2-D model, each ball touches six others. In a 3-D crystal, such as a real one, each ball would also touch three on the plane above, and three on the plane below.

![Diagram of a close-packed plane](images/close_packed_plane.gif)

A close-packed plane.

In the shot model, the balls are normally arranged into a polycrystalline form, shown schematically below:

![Diagram of a polycrystal](images/polycrystal1.gif) ![Diagram of a polycrystal](images/polycrystal2.gif)

A polycrystal will typically have crystalline regions (grains) bounded by disordered grain boundaries. These boundaries are marked in the picture on the right. Note that the packing of atoms at the grain boundaries is disordered compared to the grains.

At a grain boundary, the normal crystal structure is obviously disturbed, so the boundaries are regions of high energy.
The ideal low energy state would be a single crystal. Polycrystals form from a melt because crystallisation starts from a number of centres or nuclei. These developing crystals grow until they meet. Since they are not usually aligned before meeting, the grains cannot in general fit together as a single crystal, hence the polycrystalline structure. After crystallisation the solid tends to reduce the boundary area, and hence the internal energy, by *grain growth*. This can only happen by a process of *atomic diffusion* within the solid. Such diffusion is more rapid at a higher temperature, since it is thermally activated.

Summary
=

The focus of this package is the difference between single crystals, polycrystals and amorphous solids. This is explained in terms of the atomic scale periodicity: single crystals are periodic across their entire volume; polycrystals are periodic across individual grains; amorphous solids have little to no periodicity at all.

The different atomic structures can have effects on the macroscopic properties. A single crystal may exhibit anisotropy - we have seen mechanical anisotropy of gypsum, and optical anisotropy of quartz. Polycrystals may also be anisotropic within each grain, as seen when the polycrystalline quartz-feldspar mix was placed between the crossed polarisers. Amorphous solids do not have anisotropic mechanical or optical properties, since they are isotropic on the atomic scale.

Defects may exist in all structures, even single crystals. They include vacancies and grain boundaries, where the regular repeating structure is disrupted.

Questions
=

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. Most single crystals contain:

| | | |
| - | - | - |
| | a | No defects, since they must be perfect crystals. |
| | b | Exactly one defect, hence the term '*single* crystal'. |
| | c | Many defects. |
| | d | More defects than atoms, since every atom must generate at least one defect. |

2. If quartz had optical properties such that the refractive indices for all vibration directions were equal, the crossed polariser experiment would show:

| | | |
| - | - | - |
| | a | No light transmitted for any orientation. |
| | b | Varying intensity as the orientation changes. |
| | c | Uniform non-zero intensity of transmitted light regardless of orientation. |
| | d | Circular dark patches to represent the symmetry of the optical properties. |

3. Bubbles in a box behave in a similar way to grains in a crystal in several ways, but not in all. Which of the following statements is TRUE?

| | | |
| - | - | - |
| | a | The geometry of the places where bubbles meet one another is different from the geometry of the junctions between grains in a real polycrystal. |
| | b | The shape of the bubbles is different from the typical shape of a grain. |
| | c | The way a bubble deforms when a load is applied is different from the way a grain deforms. |
| | d | The three dimensional structure of bubbles in a box is unlike the three dimensional structure of a polycrystal. |

4. Which of the following is false?

| | | |
| - | - | - |
| | a | Quartz crystals have optically anisotropic properties. |
| | b | Glass has no regular repeating crystalline structure. |
| | c | Certain crystals may cleave easily along certain planes, defined by the crystal structure. |
| | d | Crystal defects are not found in single crystals. |

5. What does the 'shot model' fail to show?
| | | |
| - | - | - |
| | a | Polycrystallinity |
| | b | Crystalline defects |
| | c | The third dimension of the structure |
| | d | The difference between a vapour and a solid |

### Deeper questions

*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

6. Why is window glass transparent?

| | | |
| - | - | - |
| | a | Because it has a single crystal structure and each sheet is cut with the optic axis normal to the plane of the window. |
| | b | Because it has an amorphous structure with large interatomic spacing. Light waves can pass between widely spaced atoms without any interaction with the solid structure. |
| | c | Because sheets of glass are cut thin enough for light to pass through without any significant absorption. |
| | d | Because of the electronic nature of the bonds between the atoms in the glass. |

7. A quantity of pure liquid aluminium is cooled slowly through its melting point. The solid is then left at room temperature for 100 years. What is the resulting structure?

| | | |
| - | - | - |
| | a | A polycrystal with grains of identical chemical composition but different crystallographic orientation. |
| | b | A polycrystal consisting of finely spaced lamellae with alternating composition. |
| | c | A single crystal. |
| | d | An amorphous solid with good mechanical strength. |

8. *Self-diffusion* is the diffusion of a species within a body of material made from the same species. In general, self-diffusion in a polycrystalline solid can occur through the bulk of the grains (*lattice diffusion*) or along the grain boundaries (*grain boundary diffusion*). Which of the following statements gives the *best* description of the relative contribution of each process to the overall diffusion rate?

| | | |
| - | - | - |
| | a | The contributions should be about the same in both cases. |
| | b | The contribution from lattice diffusion will always be greater than the contribution from grain boundary diffusion. |
| | c | The contribution from grain boundary diffusion will always be greater than the contribution from lattice diffusion. |
| | d | The relative contributions of the two processes depend upon the temperature of the material. |

9. Imagine a polycrystalline solid with cubic grains of edge length *D*. When *D* = 10 μm, what percentage of the volume of the solid lies within a grain boundary, if the grain boundary width *d* is 1 nm? What must the grain size *D* be if 10% of the volume lies within a grain boundary? Comment on your answers. (A short numerical sketch of this estimate is given below, after question 15.)

10. Which of the following material properties could show anisotropy? *(answer yes or no for each)*

| | | | |
| - | - | - | - |
| Yes | No | a | Density |
| Yes | No | b | Young's modulus |
| Yes | No | c | Electrical conductivity |
| Yes | No | d | Refractive index |

### Open-ended questions

*The following questions are not provided with answers, but intended to provide food for thought and points for further discussion with other students and teachers.*

11. Think about some of the possible applications of materials showing optical anisotropy, like the quartz crystal.

12. How might you control the grain size of a material produced from a melt? How might the cooling rate and the chemical composition affect the results? Can you think of ways to change the grain structure *after* the material has solidified?

13. In this TLP, we have discussed *pure* materials. Real materials almost always contain some impurities.
How might these impurities be incorporated into the crystal structure of a material? Consider the relative sizes of the impurity atoms and the host atoms. Are impurities always undesirable?

14. Graphite is sometimes used as a lubricant, and diamond can be used on the tips of cutting tools. In terms of the crystal structure, why might this be?

15. Why do the individual grains in a polycrystalline material, such as those in the photo of galvanised steel (shown earlier in this TLP), appear to be different colours or shades, when the composition of every grain is approximately the same?
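For question 9, the estimate is quick to set up: treat each cubic grain of edge *D* as having an interior of edge (*D* − *d*) of perfect crystal, with the remainder belonging to boundary. A minimal numerical sketch (function and variable names are illustrative):

```python
# Question 9: fraction of the volume lying in grain boundaries, for cubic
# grains of edge D with an interior of edge (D - d) of perfect crystal.
def boundary_fraction(D, d):
    return 1 - ((D - d) / D) ** 3     # ~ 3*d/D when d << D

D, d = 10e-6, 1e-9                    # 10 um grains, 1 nm boundary width
print(f"fraction at D = 10 um: {boundary_fraction(D, d):.1e}")   # ~3e-4 (0.03 %)

# Grain size giving 10 % of the volume in boundaries:
# solve 1 - ((D - d)/D)^3 = 0.1 exactly, or use 3*d/D = 0.1 for d << D.
D_10pct = d / (1 - 0.9 ** (1 / 3))
print(f"D for 10 %: {D_10pct * 1e9:.0f} nm")                     # ~30 nm
```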
Going further
=

Most 'introductory' materials science textbooks will cover the basic material in this package. The following resources cover the subjects in more detail than this teaching and learning package, and may prove useful to the interested student.

### Books

Introduction to Mineral Sciences by Putnis (Cambridge University Press, 1992)
Provides a mineral-based treatment of many of the topics introduced in this package. Of particular interest:
Chapter 1 on Periodicity and Symmetry
Chapter 2 on Anisotropy and Optical Properties, including the phenomenon of birefringence
Chapter 5 on Crystal Structures
Chapter 7 on Defects in Minerals

The Structure of Materials by Allen and Thomas (Wiley, 1999)
Gives a thorough mathematical treatment of the noncrystalline and crystalline states (chapters 2 and 3).

### Websites

* Contains java-based applets that allow the structure of common polyhedra and crystals to be explored.
* A library of 'crystal forms' - the shapes adopted by natural crystals. Contains Java applets.
* An excellent tutorial on birefringence. Contains Java applets.

### Other resources

The MATTER Project's 'Materials Science on CD-ROM' includes modules on:
Introduction to Crystallography (including Miller Indices etc.)
Introduction to Point Defects
Dislocations

Aims

* This TLP aims to consider the basic electrochemical principles involved in the operation, design and use of batteries, and the technical criteria relevant for battery selection.

Before you start

* You should be familiar with the basic principles of electrochemistry.
* You should understand thermodynamics and kinetics.

Introduction

Electrical energy/power storage has widespread applications, as can be seen in the sheer range and diversity of batteries used as a source of electrical power. Batteries range from tiny button cells storing milliwatt-hours of energy and delivering microwatts of power, to gigantic load-levelling batteries situated within a factory-sized building, storing megawatt-hours of energy and delivering megawatts of power.

Basic principles

#### The electrochemical series

Different metals (and their compounds) have different affinities for electrons. When two dissimilar metals (or their compounds) are put in contact through an electrolyte, there is a tendency for electrons to pass from one material to another. The metal with the smaller affinity for electrons loses electrons to the material with the greater affinity, becoming positively charged. The metal with the greater affinity becomes negatively charged. A *potential difference* between the two electrodes is thus built up until it balances the tendency of the electron transfer between the metals. At this point the potential difference is the *equilibrium potential*: the potential at which the net flow of electrons is 0.

The electrochemical series represents the quantitative expression of the varying affinity of materials relative to each other. In an aqueous electrolyte, the standard electrode potential for an electrode reaction is expressed with respect to a reference electrode. Conventionally this is the H2/H+ cell, with reaction:

H+ + e– ⇌ ½ H2

#### What is a battery?

A battery is an electrochemical cell that converts chemical energy into electrical energy. It comprises two electrodes: an anode (the negative electrode) and a cathode (the positive electrode), with an electrolyte between them. At each electrode a half-cell electrochemical reaction takes place, as illustrated by the animation below.

#### Discharge

Electrode 1 is an anode: the electrode is oxidised, producing electrons. Electrode 2 is a cathode: the electrode is reduced, consuming electrons. In the fully charged state, there is a surplus of electrons on the anode (thus making it negative) and a deficit on the cathode (thus making it positive). During discharge, electrons therefore flow from the anode to the cathode in the external circuit and a current is produced. In simple terms, therefore, batteries work as electron pumps in the external circuit, preferably with only ionic current flowing through the electrolyte.

The electrical potential difference between the cathode and the anode, which can drive the electrons in the external circuit, is called the *electromotive force* (emf). Once all the active material at the cathode has been reduced, and/or all the active anodic material has been oxidised, the electrode has effectively been used up, and the battery cannot provide any more power. It can then be either disposed of, or preferably recycled, if it is a primary battery, or recharged if it is a rechargeable (secondary) battery.
If the anode were zinc and the cathode were copper, the half-reactions would proceed as follows:

At the anode: Zn → Zn2+(aq) + 2e–                  Eo = 0.76 V

At the cathode: Cu2+(aq) + 2e– → Cu               Eo = 0.34 V

Thus the total potential for this cell is 1.10 V. During use as a battery, discharge leads to dissolution of Zn at the anode and the deposition of Cu at the cathode. Such a cell is embodied in the Daniell Cell, introduced in 1836. As a practical cell this required two electrolytes (typically zinc sulphate and copper sulphate aqueous solutions) to avoid polarisation. The electrolytes are separated from each other by a salt bridge or a porous membrane, which allows the sulphate ions to pass and carry the ionic current, but blocks the metallic ions.

The Daniell Cell is an effective battery but not practical for portability. More recently, however, the idea of using two separate electrolytes has been resurrected in the form of *redox batteries*.

#### Charge

When the cell potential is depleted the battery can be recharged. When a current is applied to the cell in the opposite direction, the anode becomes the cathode, and vice versa. Thus electrode 1, which was oxidised upon discharge, is now reduced, and electrode 2, which was reduced, is now oxidised, so the electrodes are returned to their former state, ready to be discharged again. This time the anode would be copper and the cathode would be zinc, and the half-reactions would proceed as follows:

At the anode: Cu → Cu2+(aq) + 2e–          Eo = -0.34 V

At the cathode: Zn2+(aq) + 2e– → Zn             Eo = -0.76 V

The minimum potential required for charging will be 1.10 V, as this is the potential of the cell. In reality much higher potentials will be required to overcome the polarisation.
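The 1.10 V figure above can equally be obtained from tabulated *reduction* potentials, taking E(cell) = E(cathode) − E(anode); note that the TLP quotes the zinc half-reaction as an oxidation, hence the opposite sign there. A minimal sketch, using standard textbook values:

```python
# Cell emf from standard reduction potentials (V vs SHE, textbook values):
# E_cell = E(cathode couple) - E(anode couple)
E_reduction = {"Zn2+/Zn": -0.76, "Cu2+/Cu": +0.34}

def cell_emf(cathode_couple, anode_couple):
    return E_reduction[cathode_couple] - E_reduction[anode_couple]

# Daniell cell on discharge: Cu2+ reduced at the cathode, Zn oxidised at the anode.
print(f"E_cell = {cell_emf('Cu2+/Cu', 'Zn2+/Zn'):.2f} V")   # 1.10 V
```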
Thermodynamics and kinetics
=

#### Thermodynamics - The Driving Force

The overall reaction in the cell can be described by two half-cell reactions: one for the anode and one for the cathode.

The cathodic reaction can be represented as:

\[a{\rm{A}} + z{{\rm{e}}^ - } \to c{\rm{C}}\] with standard electrode potential \(E_c^o\)

where a is the number of moles of A, etc. The cathode is usually a metallic oxide or a sulphide, but oxygen is also used.

The anodic reaction can be represented as:

\[b{\rm{B}} \to d{\rm{D}} + z{{\rm{e}}^ - }\] with standard electrode potential \(E_a^o\)

The anode is generally a metal, which is electrochemically oxidised to form a metal ion which is soluble in the electrolyte.

The anode and the cathode are connected internally through an electrolyte, which is an ionic conductor, thereby providing the medium for transfer of charge as ions between the two electrodes. It is typically a solvent containing dissolved salts, acids or bases. The electronic conduction in the electrolyte should be negligible in order to avoid self-discharge by internal short-circuiting.

The overall reaction is given by:

\[a{\rm{A}} + b{\rm{B}} \to c{\rm{C}} + d{\rm{D}}\]

The change in standard free energy, \(\Delta {G^o}\), is given by:

\[\Delta {G^o} = - zF{E^o}\]

where \({E^o}\) is the standard cell potential, obtained by combining the standard electrode potentials of the two half-reactions.

At conditions other than the standard state, E can be given as:

\[E = {E^o} - {{RT} \over {zF}}\ln {{{{(aC)}^c}{{(aD)}^d}} \over {{{(aA)}^a}{{(aB)}^b}}}\]

where (aC) is the activity of C, etc. This gives:

\[\Delta G = \Delta {G^o} + RT\ln {{{{(aC)}^c}{{(aD)}^d}} \over {{{(aA)}^a}{{(aB)}^b}}}\]

The change in free energy, \(\Delta G\), of the cell reaction is the driving force for a battery, which enables it to deliver electrical energy.
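As a worked illustration of the Nernst equation above: for the Daniell cell (E° = 1.10 V, z = 2), an activity quotient of 10 lowers the cell potential by only about 30 mV. The sketch below is minimal, and the quotient value is an illustrative assumption:

```python
import math

R, F, T = 8.314, 96485.0, 298.15   # gas constant, Faraday constant, 25 C

def nernst(E0, z, Q):
    """Cell potential E = E0 - (RT/zF) ln Q, with Q the activity quotient."""
    return E0 - (R * T / (z * F)) * math.log(Q)

# Daniell cell, E0 = 1.10 V, z = 2; assume a(Zn2+)/a(Cu2+) = 10
print(f"E = {nernst(1.10, 2, 10.0):.3f} V")   # ~1.07 V
```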
Therefore

\[\eqalign{ & {i\_a} = zFK\exp \left( {{{ - \left( {\Delta G - \alpha zF\eta } \right)} \over {RT}}} \right) \cr & {\rm{ }} = zFK\exp \left( {{{ - \Delta G} \over {RT}}} \right)\exp \left( {{{\alpha zF\eta } \over {RT}}} \right) \cr} \]

\[{i\_a} = {i\_o}\exp \left( {{{\alpha zF\eta } \over {RT}}} \right)\]

This is the *Tafel equation*. By taking natural logs and rearranging, this can be written as:

\[\eta = \left( {{{RT} \over {\alpha zF}}} \right)\ln \left( {{{{i\_a}} \over {{i\_o}}}} \right)\]

\[\eta = {b\_a}\log \left( {{{{i\_a}} \over {{i\_o}}}} \right)\]

Or, in terms of electrode potential,

\[\ln ({i\_a}) = \ln ({i\_o}) + (E - {E^o})\left( {{{\alpha zF} \over {RT}}} \right)\]

\[E = {b\_a}\log \left( {{{{i\_a}} \over {{i\_o}}}} \right) + {a\_a}\]

where ba is the anodic Tafel slope.

Similarly, we can consider the reduction of metal ions at a cathode:

Mz+ + ze– → M

The activation energy will be decreased by \(\left( {1 - \alpha } \right)zF\eta \) giving:

\[{i\_c} = {i\_o}\exp \left( {{{\left( {1 - \alpha } \right)zF\eta } \over {RT}}} \right)\]

and

\[\eta = {{RT} \over {(1 - \alpha )zF}}\ln \left( {{{{i\_c}} \over {{i\_o}}}} \right)\]

Therefore \(E = {b\_c}\log \left( {{{{i\_c}} \over {{i\_o}}}} \right) + {a\_c}\) , where bc is the cathodic Tafel slope.

The following is a typical Tafel plot – a plot of log i against E:

![](figures/Tafel.png)

Thus for an applied potential, the current density can be found from the Tafel plot.

##### Other limiting factors

At very high currents, a limiting current may be reached as a result of concentration overpotential, ηC(conc):

\[{\eta \_C}({\rm{conc}}) = 2.303\left( {{{RT} \over {zF}}} \right)\log \left( {{i \over {{i\_L}}}} \right)\]

where iL is the limiting current (in this case for the cathodic reaction). The limiting current is diffusion limited, and can be determined from Fick's law of diffusion:

\[{i\_L} = zFD\left( {{C \over \delta }} \right)\]

where:

D = diffusion coefficient of Cu2+ in the electrolyte

C = concentration of Cu2+ in the bulk electrolyte

δ = the thickness of the diffusion layer

Typical values would be: D = 2 × 10-9 m2s-1, C = 0.05 × 104 mol m-3 (≈ 0.5 M) and δ = 6 × 10-4 m, which gives iL = 3.2 × 102 A m-2.

A Tafel curve showing this diffusion limiting of the current is shown below:

![](figures/Diffusion_limited_Tafel.png)

##### Tafel curves for a battery

In a battery there are two sets of Tafel curves present, one for each material. During discharge one material will act as the anode and the other as the cathode. During charging the roles will be reversed. The actual potential difference between the two materials for a given current density can be found from the Tafel curve: the anodic potential, EA, and the cathodic potential, EC, can be read off, and the total cell potential is the difference between the two. On discharge, the potential is always less than thermodynamics alone predicts. It can be calculated by the equation:

\[V{'\_{{\rm{cell}}}} = {E\_C} - \left| {{\eta \_C}} \right| + {E\_A} - \left| {{\eta \_A}} \right|\]

Upon discharge the cell potential may be further decreased by the Ohmic drop due to the internal resistance of the cell, r. Thus the actual cell potential is given by:

\[{V\_{{\rm{cell}}}} = V{'\_{{\rm{cell}}}} - iAr\]

where A = geometric area relevant to the internal resistance.
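A short numerical check of the relations above is sketched below. The limiting-current inputs are the typical values quoted in the text; the half-cell values are those of the Daniell cell, while the overpotentials, current density and internal-resistance term are hypothetical.

```python
# Minimal numerical check of the diffusion-limited current density and
# the actual cell voltage on discharge.
F = 96485.0      # C/mol, Faraday constant
z = 2            # electrons transferred per Cu2+ ion
D = 2e-9         # m^2/s, diffusion coefficient of Cu2+
C = 0.05e4       # mol/m^3 (~0.5 M), bulk Cu2+ concentration
delta = 6e-4     # m, diffusion layer thickness

i_L = z * F * D * C / delta
print(f"i_L = {i_L:.0f} A/m^2")   # ~3.2e2 A/m^2, as quoted in the text

# Discharge voltage: V'_cell = E_C - |eta_C| + E_A - |eta_A|, then the
# Ohmic drop i*A*r is subtracted (all values below are hypothetical).
E_C, E_A = 0.34, 0.76        # V, half-cell potentials (Daniell-cell values)
eta_C, eta_A = 0.05, 0.08    # V, overpotentials at this current density
i, A, r = 100.0, 0.01, 0.01  # A/m^2, m^2, ohm
V_cell = (E_C - abs(eta_C)) + (E_A - abs(eta_A)) - i * A * r
print(f"V_cell on discharge ~ {V_cell:.2f} V")
```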
Similarly, on charging, the potential is greater than thermodynamics alone predicts, since the overpotentials now add to the half-cell potentials. It can be calculated by the equation:

\[V{'\_{{\rm{charge}}}} = {E\_C} + \left| {{\eta \_C}} \right| + {E\_A} + \left| {{\eta \_A}} \right|\]

The cell potential is now further increased by the Ohmic drop, and the actual cell potential is given by:

\[{V\_{{\rm{charge}}}} = V{'\_{{\rm{charge}}}} + iAr\]

Primary batteries =

Primary batteries are not easily rechargeable, and consequently are discharged and then disposed of. Many of these are “dry cells” – cells in which the electrolyte is not a liquid but a paste or similar. The cell electrochemical reactions are not easily reversible, and the cell is operated until the active components in the electrodes are exhausted. Generally primary batteries have a higher capacity and initial voltage than rechargeable batteries.

##### Applications:

* Portable devices
* Lighting
* Toys
* Memory back-up
* Watches/clocks
* Hearing aids
* Radios
* Medical implants
* Defence related systems such as missiles

##### Advantages:

* Inexpensive
* Convenient
* Lightweight
* Good shelf life
* High energy density at low/moderate discharge rates

##### Disadvantages:

* Can only be used once
* Leads to a large amount of waste batteries to be recycled
* Batteries put into landfill sites have a severe environmental impact
* Life cycle energy efficiency < 2 %

The table below demonstrates the properties of various primary batteries:

| **System** | **Nominal Cell Voltage (V)** | **Specific Energy (Wh/kg)** | **Advantages** | **Disadvantages** | **Applications** |
| - | - | - | - | - | - |
| **C/Zn** | 1.50 | 65 | Lowest cost; variety of shapes and sizes | Low energy density; poor low-temperature performance | Torches; radios; electronic toys and games |
| **Mg/MnO2** | 1.60 | 105 | Higher capacity than C/Zn; good shelf life | High gassing on discharge; delayed voltage | Military and aircraft receiver-transmitters |
| **Zn/Alk/MnO2** | 1.50 | 95 | Higher capacity than C/Zn; good low-temperature performance | Moderate cost | Personal stereos; calculators; radio; TV |
| **Zn/HgO** | 1.35 | 105 | High energy density; flat discharge; stable voltage | Expensive; energy density only moderate | Hearing aids; pacemakers; photography; military sensors/detectors |
| **Cd/HgO** | 0.90 | 45 | Good high and low-temperature performance; good shelf life | Expensive; low energy density | |
| **Zn/Ag2O** | 1.50 | 130 | High energy density; good high rate performance | Expensive (but cost effective) | Watches; photography; missiles; larger space applications |
| **Zn/Air** | 1.50 | 290 | High energy density; long shelf life | Dependent on environment; limited power output | Watches; hearing aids; railway signals; electric fences |
| **Li/SOCl2** | 3.60 | 300 | High energy density; long shelf life | Only low to moderate rate applications | Memory devices; standby electrical power devices |
| **Li/SO2** | 3.00 | 280 | High energy density; best low-temperature performance; long shelf life | High-cost pressurized system | Military and special industrial needs |
| **Li/MnO2** | 3.00 | 200 | High energy density; good low-temperature performance; cost effective | Small in size, only low-drain applications | Electrical medical devices; memory circuits; fusing |

Zinc/carbon batteries =

This is commonly known as the Leclanché Cell; despite being the oldest type of primary battery, it is still the most commonly used as it is very low-cost.
![](figures/leclanche.png)

Georges Leclanché

The first cell was produced by Georges Leclanché in 1866 and was the first cell to contain only one low-corrosive fluid electrolyte with a solid cathode. This gave it a low self-discharge in comparison to previously attempted batteries. The original cell consisted of a solid zinc anode with an ammonium chloride solution as the electrolyte, immobilised in the form of a paste (hence called a “dry cell”), and a 1:1 mixture of powdered carbon and manganese dioxide packed around a carbon rod acting as the cathode. In another version for extra heavy-duty applications, the electrolyte is zinc chloride mixed with a small amount of ammonium chloride. The most common variant is the alkaline cell, where the electrolyte is potassium hydroxide.

![](../images/divider400.jpg)

#### Characteristics in brief

Voltage: 1.5 – 1.75 V

Discharge characteristics: Generally sensitive to external factors. Generally very sloped. Better when discharged intermittently.

Service life: 110 min (continuous use)

Shelf life: ~ 1 – 2 years (at room temperature)

![](../images/divider400.jpg)

#### Chemistry

The zinc/carbon cell uses a zinc anode and a manganese dioxide cathode; the carbon is added to the cathode to increase conductivity and retain moisture; it is the **manganese dioxide** that takes part in the reaction, **not** the carbon. The overall reaction in the cell is:

Zn + 2MnO2 → ZnO + Mn2O3

The exact mechanism for this is complicated, and there is still controversy over it; however, the approximate half-cell reactions are:

Anode: Zn → Zn2+ + 2e–

Cathode: 2NH4+ + 2MnO2 + 2e– → Mn2O3 + H2O + 2NH3

However, this is complicated by the fact that the ammonium ion produces 2 gaseous products:

2NH4+ + 2e– → 2NH3 + H2

These products must be absorbed in order to prevent a build-up of pressure in the vessel. This occurs by 2 mechanisms:

ZnCl2 + 2NH3 → Zn(NH3)2Cl2

2MnO2 + H2 → Mn2O3 + H2O

![](../images/divider400.jpg)

#### Construction

The cell has two basic designs: the cylindrical cell and the flat cell.

##### Cylindrical Cell

The zinc serves as both the container and the anode. The manganese dioxide/carbon mixture is wetted with electrolyte and shaped into a cylinder with a small hollow in the centre. A carbon rod is inserted into the centre, which serves as a current collector. It is also porous to allow gases to escape, and provides structural support. The separator is either cereal paste or treated absorbent kraft paper (the kind of brown paper used to make large envelopes or grocery bags).

![](figures/ZincCarbon.png)

Carbon cathode: This is made of powdered carbon black and electrolyte. It adds conductivity and holds the electrolyte. The MnO2 to carbon ratios vary between 10:1 and 3:1, with a 1:1 mixture being used for photoflash batteries, as this gives a better performance for intermittent use with high bursts of current. Historically the carbon black was graphite; however, acetylene black is often used in modern batteries as it can hold more electrolyte.

Manganese dioxide: Several grades of MnO2 are available:

* Natural manganese dioxide: ores occur naturally in Gabon, Greece and Mexico with 70 – 85% MnO2
* Activated manganese dioxide
* Chemically synthesised manganese dioxide: 90 – 95% MnO2
* Electrolytic manganese dioxide (EMD): higher cell capacity and rate capability, and less polarisation. Used in industrial applications.

Electrolyte: A standard Leclanché cell uses a mixture of ammonium chloride and zinc chloride in aqueous solution.
A zinc-corrosion inhibitor is also added, which forms an oxide layer. This inhibitor is usually mercuric oxide or mercurous chloride. A typical electrolyte composition is:

| Component | Proportion |
| - | - |
| NH4Cl | 26.0 % |
| ZnCl2 | 8.8 % |
| H2O | 65.2 % |
| Corrosion inhibitor | 0.25 - 1.0 % |

Carbon rod: This is inserted into the cathode and acts as a current collector. It also provides structural support and vents the hydrogen gas that evolves as the reactions proceed. When raw, the rods are very porous, so they must be treated with waxes or oils to prevent loss of water, while remaining porous enough to allow hydrogen to pass through. Ideally they should also prevent oxygen entering the cell, as this would aid corrosion of the zinc.

Separator: This can be either a gelled paste, or kraft paper coated with cereal. It physically separates the anode and the cathode, but allows ionic conduction to occur in the electrolyte.

Paste: The paste is flowed into the zinc can, and the carbon cathode inserted, forcing the paste up the sides of the can between the zinc and the cathode, where it sets.

Paper: The paper is coated with cereal, or another gelling agent, rolled into a cylinder and, along with a circular bottom sheet, is added to the can. The carbon cathode is added, and the rod inserted, pushing the paper against the walls of the can. This compression releases some electrolyte from the cathode mix, soaking the paper. As the paste is relatively thick, more electrolyte can be held by the paper than by the paste, giving increased capacity; thus paper is usually the preferred separator.

Seal: This can be asphalt pitch, a wax/resin mix, or plastic (usually polyethylene or polypropylene). An airspace is usually left between the seal and the cathode to allow for expansion. The function of the seal is to prevent evaporation of the electrolyte, and to prevent oxygen entering the cell and corroding the zinc.

Jacket: The jacket provides strength and protection, and will hold the manufacturer's label. It contains various components, which can be metal, paper, plastic, mylar, cardboard (sometimes asphalt-lined) and foil.

Electrical contacts: These are the terminals of the battery, and are tin-plated steel or brass. They aid conductivity and prevent exposure of the zinc.

Alkaline/manganese oxide batteries =

This primary battery system has a higher capacity than the zinc/carbon cell. It has very good performance at high discharge rates, on continuous discharge and at low temperatures. The first modern alkaline cell was developed in the 1960s and by 1970 it was produced all over the world. Currently over 15 billion alkaline cells are used worldwide each year.

![](../images/divider400.jpg)

#### Chemistry

The active materials used are the same as in the Leclanché cell – zinc and manganese dioxide. However, the electrolyte is potassium hydroxide, which is very conductive, resulting in a low internal impedance for the cell. This time the zinc anode does not form the container; it is in the form of a powder instead, giving a large surface area. The following half-cell reactions take place inside the cell:

At the anode: Zn + 2OH– → Zn(OH)2 + 2e–

Zn(OH)2 + 2OH– → [Zn(OH)4]2–

At the cathode: 2MnO2 + H2O + 2e– → Mn2O3 + 2OH–

For full discharge: MnO2 + 2H2O + 2e– → Mn(OH)2 + 2OH–

Overall: Zn + 2MnO2 → ZnO + Mn2O3

For full discharge: Zn + MnO2 + 2H2O → Mn(OH)2 + Zn(OH)2

It is not possible to describe the cathodic reaction on discharge in a simple, unambiguous way, despite a lot of research.
In fact the discharge curve has two fairly distinct sections, corresponding to the change in the oxidation state of Mn from +4 to +3 and then to +2 during the reduction of MnO2. The reality is more complicated than described in the two reactions shown above.

![](../images/divider400.jpg)

#### Construction

This cell is “inside out” compared to the Leclanché cell - the manganese dioxide cathode is external to the zinc anode, giving better diffusion properties and lower internal resistance.

![](figures/Alkaline_manganese_dioxide.png)

Cathode: For an alkaline cell, electrochemically produced MnO2 must be used. The ore rhodochrosite (MnCO3) is dissolved in sulphuric acid, and electrolysis is carried out under carefully controlled conditions using titanium, lead alloys or carbon for the electrode onto which the oxide is deposited. This gives the highest possible purity, typically 92 ± 0.3%. The cathode itself also contains around 10% graphite – more for more powerful batteries. A typical composition would be: 70% MnO2 (of which 10% is water); ~10% graphite; 1-2% acetylene black; balance: binding agents and electrolyte.

Zinc anode: The zinc must be very pure (99.85 – 99.90%) and is produced by electroplating or distilling. Very small amounts of lead are sometimes added to help prevent corrosion (usually ~0.05%). The zinc is powdered by discharging a small stream of molten zinc into a jet of air, “atomising” it. The powder contains particles between 0.0075 and 0.8 mm. There are two methods of formation of the anodes from the powder:

* **Gelled anodes:** These contain around 76% Zn, 7% mercury, 6% sodium carboxymethyl cellulose and 11% KOH solution. The mixture is extruded into the cell, as the viscosity is high. In very small cells, NaOH is added to reduce creepage around the seal area. However this mixture is not ideal: it does not fully utilise the zinc at high current densities. Two-phase anodes have therefore been developed, consisting of a clear gel phase and a more compact zinc-powder gel phase, which enables 90% zinc usage.
* **Porous anodes:** The zinc powder is wetted with mercury and cold pressed, welding the particles together. The porosity can be controlled by materials such as NH4Cl or plastic binders if required, which can be removed later. These anodes can carry very high currents.

Separators: These cells usually use “macroporous” separators, made from woven or felted materials.

Zinc/silver oxide batteries =

Zinc/silver oxide batteries (the first practical zinc/silver oxide primary battery was developed in the 1930s by André; Volta built the original zinc/silver plate voltaic pile in 1800) are important as they have a very high energy density, and can deliver current at a very high rate, with constant voltage. However, the materials are high cost, so the system is limited to application in button cells, for use in calculators, watches, hearing aids and other such applications that require small batteries and a long service life.

![](../images/divider400.jpg)

#### Characteristics in brief

Voltage: around 1.6 V, linearly dependent on temperature.

Discharge characteristics: Very good – flat discharge curve.

Service life: several thousand hours (continuous use).

Shelf life: several years (at room temperature).

![](../images/divider400.jpg)

#### Chemistry

The silver oxide used is usually in the monovalent form (Ag2O), as it is the most stable.
The following reactions take place inside the cell:

At the anode: Zn + 2OH– → Zn(OH)2 + 2e–

At the cathode: Ag2O + H2O + 2e– → 2Ag + 2OH–

Overall: Ag2O + H2O + Zn → 2Ag + Zn(OH)2

![](../images/divider400.jpg)

#### Construction

![](figures/ZincSilver.png)

The cathode is generally composed of monovalent silver oxide with added graphite to improve conductivity. The anode is zinc powder mixed with a gelling agent, which is then dissolved in the alkaline electrolyte. The two are separated by a combination of layers of grafted plastic membrane, treated cellophane and non-woven absorbent fibres. The top cup (negative terminal) is made up of laminated layers of copper, tin, steel and nickel, and the bottom cup (positive terminal) is nickel-plated steel. An insulating gasket prevents contact between the two.

Secondary batteries =

Secondary (rechargeable) batteries can be recharged by applying a reverse current, as the electrochemical reaction is reversible. The original active materials at the two electrodes can be reconstituted, chemically and structurally, by the application of an electrical potential between the electrodes to “inject” energy. These batteries can be discharged and recharged many times.

#### Applications:

These fall into two categories:

(a) The battery is used as an energy storage device. It is constantly connected to an energy source and charged by it. It can then release the stored energy whenever needed, e.g. in

* Car battery used to start engine
* Aircraft systems
* Standby power resources
* Emergency no-fail systems

(b) The battery is used as a primary battery would be, but is then recharged instead of being disposed of, e.g. in

* Electric vehicles
* Mobile phones
* Cameras
* Power tools
* Toys
* Portable computers

#### Advantages:

* High power density
* High discharge rate
* Good low-temperature performance

#### Disadvantages:

* Lower energy density
* Poorer charge retention
* Safety issues
* Lack of standards
* High initial costs

The table below demonstrates the properties of various rechargeable batteries:

| **System** | **Nominal Cell Voltage (V)** | **Specific Energy (Wh/kg)** | **Advantages** | **Disadvantages** | **Applications** |
| - | - | - | - | - | - |
| **Pb/acid** | 2.00 | 35 | Low cost; good high and low-temperature operation | Low cycle life; low energy density; poor charge retention | Cars; lawn mowers; aircraft |
| **Ni/Cd** | 1.20 | 30 | Good physical durability; good charge retention; good cycle life | High cost; memory effect | Aircraft; emergency power applications |
| **Ni/Fe** | 1.20 | 60 | Good physical durability; long cycling and standing life | Low power and energy density; high self discharge; high cost | Stationary applications; fork lift trucks |
| **Ni/Zn** | 1.60 | 27 | High energy density; low cost; good low-temperature performance | Poor cycle life | Electric scooters/bikes; military vehicles |
| **Zn/AgO** | 1.50 | 90 | Highest energy density; low self discharge; high discharge rate | High cost; low cycle life; low performance at low temperatures | Military equipment e.g. torpedo propulsion, submarines |
| **Cd/AgO** | 1.20 | 55 | High energy density; low self discharge; good cycle life | High cost; low performance at low temperatures | Portable power tools; satellites |
| **Ni/H2** | 1.40 | 55 | High energy density; good cycle life; can tolerate overcharge | High initial cost; self discharge proportional to H2 pressure | Aerospace |
| **Ag/H2** | 1.40 | 80 | High energy density; good cycle life | High cost - limited to military and aerospace applications | Aerospace |
| **Li-ion** | up to 4.2 | 135 | High specific energy; good shelf life; mouldable; non-volatile | High cost; expensive control methods needed for charge/discharge | Mobile phones |

Lead/acid batteries =

The lead acid battery is the most used secondary battery in the world. The most common is the SLI battery used in motor vehicles for engine **S**tarting, vehicle **L**ighting and engine **I**gnition; however, it has many other applications (such as communications devices, emergency lighting systems and power tools) due to its cheapness and good performance. It was first developed in 1860 by Raymond Gaston Planté. Strips of lead foil with coarse cloth in between were rolled into a spiral and immersed in a 10% solution of sulphuric acid. The cell was further developed by initially coating the lead with oxides, then by forming plates of lead oxide by coating an oxide paste onto grids. The electrodes were also changed to a tubular design.

![](../images/divider400.jpg)

#### Characteristics in brief (for an SLI battery)

Voltage: 2 V

Discharge characteristics: Generally quite curved, particularly at higher discharge rates. Best performance with intermittent discharge.

Service life: Several years

![](../images/divider400.jpg)

#### Chemistry

The lead acid battery uses lead as the anode and lead dioxide as the cathode, with an acid electrolyte. The following half-cell reactions take place inside the cell during discharge:

At the anode: Pb + HSO4– → PbSO4 + H+ + 2e–

At the cathode: PbO2 + 3H+ + HSO4– + 2e– → PbSO4 + 2H2O

Overall: Pb + PbO2 + 2H2SO4 → 2PbSO4 + 2H2O

During the charging process, the reactions at each electrode are reversed; the anode becomes the cathode and the cathode becomes the anode.

Gassing: During charging, given the high voltage, water is dissociated at the two electrodes, and gaseous hydrogen and oxygen products are readily formed, leading to the loss of electrolyte and a potentially explosive situation. Sealed batteries are made safer by allowing the gases to recombine within the cell.

Sulphation: Under certain circumstances the lead sulphate products at both electrodes achieve an irreversible state, making the recharging process very difficult.

![](../images/divider400.jpg)

#### Construction

![](figures/Lead_acid.png)

Lead: Pure lead is too soft to use as a grid material, so in general the lead is hardened by the addition of 4 – 6% antimony. However, during the operation of the battery the antimony dissolves and migrates to the anode, where it alters the cell voltage. This means that the water consumption in the cell increases and frequent maintenance is necessary. There are two possible solutions to this problem:

(1) Using below 4% antimony, the water consumption of the battery is reduced; however, it is then necessary to add small amounts of other elements such as sulphur, copper, arsenic and selenium. These act as grain refiners, decreasing the grain size of the lead and thereby increasing its hardness and strength.

(2) Alkaline earth metals such as calcium can be used to stiffen the lead. This is often used for telephone applications, and for no-maintenance automotive batteries, since a more stable battery is required. A typical alloy would be 0.03 – 0.10% calcium and 0.5 – 1.0% tin (to enhance mechanical and corrosion properties).

The function of the grid is to hold the active material and to conduct electricity between the active material and the battery terminals.
The design is a simple grid framework with a “tab” or “lug” for connection to the terminal post. “Book mould” casting is the most common method of production for the grid. Permanent steel moulds are made from blocks by machining. The moulds are closed and filled with sufficient molten lead to fill the mould, leaving some excess to form a sprue, which is then removed by cutting or stamping. Grids can also be formed by mechanical working, either by cutting deep grooves into a sheet of lead, or by rolling up crimped strips and inserting them into holes in a cast plate.

Lead oxide: The lead can be oxidised by two processes: the Barton pot and the ball mill.

* Barton pot: A fine stream of molten lead is inserted into a heated vessel. Each droplet reacts with the air to form an oxide layer, giving 70 – 85% lead oxide.
* Ball milling: Pieces of lead are put into a rotary mechanical mill, forming fine lead flakes, which are then oxidised in air and removed. This gives 75 – 80% lead oxide.

Red lead (Pb3O4) can also be added to the PbO formed by these methods, as it is more conductive. This is produced from PbO by roasting in a flow of air. This process also increases the percentage of lead oxide in the material. The oxide is mixed with water and sulphuric acid in a mixer to form a paste, which is then integrated with the grid by extrusion to form a plate. The paste is pressed by a machine into the interstices of the grid. The plates are partially dried, then stacked for curing. The curing process transforms the paste into a cohesive, porous solid. The most typical form of curing is “hydrosetting”: the grid is left at low temperature and humidity (25 – 40°C and 8 – 20% H2O) for between 24 and 72 hours.

Assembly: The simplest cell would consist of one cathode plate, one anode plate and a separator between them. In practice, most cells contain up to 30 plates with separators between them. The separators are usually cellulose, PVC, rubber, microporous polyethylene or non-woven polypropylene. The plates are stacked and welded together. The tabs that are fixed to the plates are cast, then punched on between the layers and welded together. The plates are suspended inside the case, which is filled with electrolyte in order to activate the cell.

Lithium batteries =

One of the main attractions of lithium as an anode material is its position as the most electronegative metal in the electrochemical series which, combined with its low density, offers the largest amount of electrical energy per unit weight among all solid elements. In many applications the weight of the battery is a significant percentage of the total weight, and there is great competition to make lighter batteries. Li cannot be used with the traditional aqueous electrolytes, due to the very vigorous corrosive reaction between Li and water, with flammable hydrogen as the product. It took many years to develop a suitable electrolyte based on organic solvents with sufficient stability and ionic conductivity. Ionic conductivity is induced by dissolving a suitable Li salt in an organic solvent, used in the form of a gel or immobilised within a polymeric separator. In the 1980s progress was made in the use of Li as an anode material with MnO2, liquid SO2 or thionyl chlorides as the cathode, and lithium hexafluorophosphate dissolved in propylene carbonate as a typical organic electrolyte.
Li cells are generally hermetically sealed against contact with air and moisture. Whilst the primary lithium battery has been well established for nearly two decades, there have been many problems experienced in developing rechargeable lithium batteries, mainly due to the extreme reactivity of lithium, and they have only become widely available since the mid-1990s.

![](../images/divider400.jpg)

#### Rechargeable batteries

Li-ion batteries are now used in very high volumes in a number of relatively new applications, such as in mobile phones, laptops, cameras and many other consumer products. Typical Li-ion cells use carbon as the anode and LiCoO2 or LiMn2O4 as the cathode. The first commercial Li-ion cell, introduced by Sony in the 1990s, used a polymeric gel electrolyte, swollen by a high proportion of organic solvent for the Li salt. The translucent gel achieved a respectable ionic conductivity of 3×10-3 S cm-1 at 300 K.

![](../images/divider400.jpg)

#### Chemistry and construction

In order to overcome the problems associated with the high reactivity of lithium, the anode material is not purely the metal; it is a non-metallic compound, e.g. carbon, which can store and exchange lithium ions. A lithium ion-accepting material, for example CoO2, is then used as the cathode material, and lithium ions are exchanged back and forth between the two during discharging and charging. These are called intercalation electrodes. This type of battery is known as a “rocking chair battery”, as the ions simply “rock” back and forth between the two electrodes.

![](figures/Lithiumion.png)

Cathode materials: The most common compounds used for cathode materials are LiCoO2, LiNiO2 and LiMn2O4. Of these, LiCoO2 has the best performance but is very high in cost, is toxic and has a limited range of lithium content over which it is stable. LiNiO2 is more stable; however, the nickel ions can disorder. LiMn2O4 is generally the best value for money, and is also better for the environment.

Anode material: The anode material is carbon based, usually with composition Li0.5C6. This lithium content is lower than would be ideal; however, higher-capacity carbons pose safety issues.

Electrolyte: Since lithium reacts violently with water, and the cell voltage is so high that water would decompose, a non-aqueous electrolyte must be used. A typical electrolyte is LiPF6 dissolved in an ethylene carbonate and dimethyl carbonate mixture. After initial charging, the following reactions take place upon discharge:

At the cathode: xLi+ + Mn2O4 + xe– → LixMn2O4

At the anode: LixC6 → xLi+ + 6C + xe–

Overall: LixC6 + Mn2O4 → LixMn2O4 + 6C

![](../images/divider400.jpg)

#### Lithium polymer batteries

Another way of overcoming the high reactivity of lithium is to use a solid polymer electrolyte. Using lithium metal gives a higher energy density, higher cell potential and very low self-discharge, so if the safety issues can be overcome, it would be the preferred anode material. Another problem to overcome is the high resistivity of the polymer electrolyte. One possible solution is to use the electrolyte as a very thin film to decrease the total resistance.

![](figures/LiPoly_sml.png)

![](../images/divider400.jpg)

#### Cell capacity and specific energy density

It is important to specify the exact steps taken when calculating the theoretical cell capacity and the maximum specific energy density of a given lithium cell. For full lithium utilisation, the cell capacity is 3860 mAh/g of lithium, simply calculated from Faraday's laws.
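A quick check of that figure, assuming only the molar mass of lithium and one electron per atom:

```python
# Quick check of the 3860 mAh/g figure via Faraday's law: one mole of
# lithium (6.94 g) supplies one mole of electrons (96485 C).
F = 96485.0    # C/mol, Faraday constant
M_Li = 6.94    # g/mol, molar mass of lithium

capacity_mAh_per_g = F / M_Li / 3.6   # 1 mAh = 3.6 C
print(f"{capacity_mAh_per_g:.0f} mAh/g of Li")   # ~3860 mAh/g
```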
Thus, the actual rated capacity of the cell in mAh is determined by the weight of lithium in the cell. The actual specific capacity, on the other hand, is usually calculated as the actual rated capacity divided by the weight of lithium in the cell (and quoted as mAh/g of lithium) or, less frequently, as the ratio of the rated capacity and the weight of the cell (and quoted as mAh/g of the cell). In the general case, the cell weight can be calculated from the weight of lithium, the anodic and cathodic multipliers (fA and fC, introduced below) and the weight of the auxiliary components of the cell.

![](../images/divider400.jpg)

#### Li-ion battery

In order to maximise the specific energy density, it is desirable to minimise the weight of the cell, while maximising the ratio of the weight of lithium to the weight of the cell. For the Li-ion cell, for example, the theoretical stoichiometric value of the anodic multiplier (fA) is 10.3, while that for the cathode (fC) is 25. Thus the maximum theoretical specific energy density for a maximum voltage of 4.2 V is calculated to be between 380 and 460 Wh/kg, depending upon whether the weight of the auxiliary components is taken into account. The stoichiometric value for the carbon anode arises from the fact that lithium is intercalated into the carbon structural layers at the maximum possible molar ratio of 1 Li atom to 6 C atoms, giving rise to the limiting formula LiC6. In practice, Li availability in the anode is only 50% of the theoretical maximum, corresponding to the formula Li0.5-xC6, where x can vary from 0 (fully charged state) to 0.5 (fully discharged state). For the cathode, Li is intercalated into the layered structure of LiCoO2. Although it appears that, theoretically, the maximum ratio of Li to CoO2 is one, in practice the formula corresponds to Li0.5+xCoO2, where x = 0 corresponds to the fully charged state and x = 0.5 corresponds to the fully discharged state. Thus the available capacities for both the anode and the cathode are more than halved, with the multipliers typically being fA > 21 and fC > 50. The excess value arises from the weight of the binder and other additives. The operating voltage during discharge decreases from a maximum value of 4.2 V to a cut-off value of 2.8 V, giving an average value of 3.35 V over the discharge cycle. The practical specific energy density is therefore in the region of 160 Wh/kg for a Li-ion cell. The only practical method for increasing the specific energy density of a Li-ion cell is to decrease the weight of the auxiliary components of the cell. It is widely believed that, with a considerable amount of research and development, the maximum specific energy density that can be achieved for a Li-ion cell within the next five years will reach 220 Wh/kg of the cell. The cycle life of Li-ion batteries is between 500 and 1000 cycles.

Battery characteristics =

The following battery characteristics must be taken into consideration when selecting a battery:

1) Type

See the primary and secondary batteries sections above.

2) Voltage

The theoretical standard cell voltage can be determined from the electrochemical series, using Eo values:

Eo (cathodic) – Eo (anodic) = Eo (cell)

This is the standard theoretical voltage. The theoretical cell voltage is modified by the Nernst equation, which takes into account the non-standard state of the reacting components. The Nernstian potential will change with time, either because of use or because of self-discharge, by which the activity (or concentration) of the electro-active component in the cell is modified.
Thus the nominal voltage is determined by the cell chemistry at any given point in time. The actual voltage produced will always be lower than the theoretical voltage, due to polarisation and the resistance losses (IR drop) of the battery, and is dependent upon the load current and the internal impedance of the cell. These factors are dependent upon electrode kinetics and thus vary with temperature, state of charge, and with the age of the cell. The actual voltage appearing at the terminals needs to be sufficient for the intended application. Typical values of voltage range from 1.2 V for a Ni/Cd battery to 3.7 V for a Li-ion battery. The following graph shows the difference between the theoretical and actual voltages for various battery systems:

![](figures/theoreticalactual.png)

3) Discharge curve

The discharge curve is a plot of voltage against percentage of capacity discharged. A flat discharge curve is desirable, as this means that the voltage remains constant as the battery is used up.

4) Capacity

The theoretical capacity of a battery is the quantity of electricity involved in the electrochemical reaction. It is denoted Q and is given by:

\[Q = xnF\]

where x = number of moles of reaction, n = number of electrons transferred per mole of reaction and F = Faraday's constant.

The capacity is usually given in terms of mass, not the number of moles:

\[Q = {{nF} \over {3600\,{M\_r}}}\]

where Mr = molecular mass and the factor of 3600 converts coulombs to ampere-hours. This gives the capacity in units of ampere-hours per gram (Ah/g). In practice, the full battery capacity can never be realised, as there is a significant weight contribution from non-reactive components such as binders & conducting particles, separators & electrolytes and current collectors & substrates, as well as packaging. Typical values range from 0.26 Ah/g for Pb to 26.59 Ah/g for H2.

5) Energy density

The energy density is the energy that can be derived per unit volume of the cell (Wh/L).

6) Specific energy density

The specific energy density is the energy that can be derived per unit weight of the cell (or sometimes per unit weight of the active electrode material). It is the product of the specific capacity and the operating voltage in one full discharge cycle. Both the current and the voltage may vary within a discharge cycle, and thus the specific energy derived is calculated by integrating the product of current and voltage over time. The discharge time is related to the maximum and minimum voltage thresholds and is dependent upon the state of availability of the active materials and/or the avoidance of an irreversible state for a rechargeable battery.

7) Power density

The power density is the power that can be derived per unit weight of the cell (W/kg).

8) Temperature dependence

The rate of the reaction in the cell will be temperature dependent, according to the theories of kinetics. The internal resistance also varies with temperature; low temperatures give higher internal resistance. At very low temperatures the electrolyte may freeze, giving a lower voltage as ion movement is impeded. At very high temperatures the chemicals may decompose, or there may be enough energy available to activate unwanted side reactions, reducing the capacity.
The rate of decrease of voltage with increasing discharge will also be higher at lower temperatures, as will the capacity – this is illustrated by the following graph:

![](figures/Temperature.png)

9) Service life

The battery cycle life for a rechargeable battery is defined as the number of charge/recharge cycles a secondary battery can perform before its capacity falls to 80% of its original value. This is typically between 500 and 1200 cycles. The battery shelf life is the time a battery can be stored inactive before its capacity falls to 80%. The reduction in capacity with time is caused by the depletion of the active materials by undesired reactions within the cell. Batteries can also be subjected to premature death by:

* Over-charging
* Over-discharging
* Short circuiting
* Drawing more current than they were designed to produce
* Subjection to extreme temperatures
* Subjection to physical shock or vibrations

10) Physical requirements

This includes the geometry of the cell, its size, weight and shape, and the location of the terminals.

11) Charge/discharge cycle

There are many aspects of the cycle that need consideration, such as:

* Voltage necessary to charge
* Time necessary to charge
* Availability of charging source
* Potential safety hazards during charge/discharge

12) Cycle life

The cycle life of a rechargeable battery is the number of discharge/charge cycles it can undergo before its capacity falls to 80%.

13) Cost

This includes the initial cost of the battery itself as well as the cost of charging and maintaining the battery.

14) Ability to deep discharge

There is a logarithmic relationship between the depth of discharge and the life of a battery; thus the life of a battery can be significantly increased if it is not fully discharged. For example, a mobile phone battery will last 5-6 times longer if it is only discharged 80% before recharging. Special deep-discharge batteries are available for applications where deep discharge might be necessary.

15) Application requirements

The battery must be sufficient for the intended application. This means that it must be able to produce the right current with the right voltage. It must have sufficient capacity, energy and power. It should also not exceed the requirements of the application by too much, since this is likely to result in unnecessary cost; it must give sufficient performance for the lowest possible price.

The future =

The future of battery technology now lies in the concept of fuel cells. A fuel cell, like a battery, is an electrochemical cell. It has an anode and a cathode, and an oxidation reaction and a reduction reaction occur; however, the electrodes act as catalysts, and are not used up in these reactions. Instead, the reactions take place within a “fuel”, which provides the source for the electrons. This is a very efficient method of producing electricity (up to 85% efficient – that's over three times as efficient as an internal combustion engine). Fuel cells are also very environmentally friendly – the most environmentally friendly cells produce only water and heat. At the moment the main disadvantage is that they are very costly; however, it is very likely that in the future fuel cells will be the power source of choice for many applications.

Questions =

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*
1. Which of the following statements is not true?

| | | |
| - | - | - |
| | a | A battery is an electrochemical cell. |
| | b | If an electrode acted as the anode during discharge it will act as a cathode during charging. |
| | c | During the operation of a battery, one material is oxidised and another is reduced. |
| | d | The voltage needed to charge a battery is less than the voltage produced when it is discharged. |

2. What can thermodynamics and kinetics tell us about the reactions that occur in a battery?

| | | |
| - | - | - |
| | a | They both tell us if it is possible for the reaction to occur or not. |
| | b | Thermodynamics tells us if it is possible for a reaction to occur or not; kinetics tells us what the rate of this reaction will be. |
| | c | They both tell us what the rate of the reaction will be. |
| | d | Kinetics tells us if it is possible for a reaction to occur or not; thermodynamics tells us what the rate of this reaction will be. |

3. Which of the following pieces of information cannot be gained from a Tafel plot?

| | | |
| - | - | - |
| | a | The maximum voltage that can be expected whilst drawing or supplying a particular current density. |
| | b | The minimum voltage that can be expected whilst drawing or supplying a particular current density. |
| | c | The anodic and cathodic potentials in a battery for a given current density. |
| | d | The diffusion-limited current density. |

4. Are the following primary or rechargeable batteries?

(a) Zinc/carbon battery (b) Lithium ion battery (c) Lead/acid battery (d) Alkaline/manganese dioxide battery (e) Nickel/cadmium battery (f) Zinc/air battery (g) Zinc/silver oxide battery (h) Silver/hydrogen battery (i) Potato battery

5. Using the table of standard electrode potentials, what would the theoretical potential difference be in the following cells, if only thermodynamics were considered?

(a) Zinc and copper (b) Nickel and mercury (c) Zinc and silver (d) Lithium and silver

6. What batteries might you use to power the following applications?

(a) A mobile phone (b) A hand-held computer console (c) A wristwatch (d) To start a car

Going further =

### Books

* *High Density Energy Lithium Batteries*; KE Aifantis, SA Hackney and RV Kumar (Editors), Wiley-VCH Verlag, 2010
Aims

On completion of this TLP package, you should:

* Understand the stress distribution within beams subject to bending or torsion.
* Be familiar with the concepts of the radius of curvature of a section of a beam (and its reciprocal, the curvature), second moment of area, polar moment of inertia, beam stiffness and torsional stiffness.
* Be able to calculate the moments acting in a beam subject to bending or torsion.
* Be able to calculate the deflections of a beam on bending and the angle of twist of a bar under torsion.
* Be able to predict the effect of plastic deformation, at least for simple beam geometries.

Before you start

There are no specific prerequisites for this TLP, but it would be useful to be familiar with *stress* and *strain*, *elastic* and *plastic deformation*, *Young's modulus*, E, and *yield stress*, σY. While a basic knowledge of mechanical deformation is assumed, this teaching and learning package covers all the fundamentals of beam mechanics.

Introduction

***Beam stiffness*** is an important concept for many types of structure, particularly those with slender shapes. Inadequate beam stiffness can lead to large deflections, and may also cause high localised stresses and a danger of failure in that region. In addition to bending moments, such structures may be subjected to twisting, or torsional moments (torques). In fact, virtually all structures, including buildings and many natural structures (trees, bones etc.), are commonly subjected to significant applied moments. It is important to recognise the roles of structural shape, applied loads and material properties when predicting the resultant moments, deflections and stress distributions. The aim of this TLP is to provide the necessary information to allow such bending and torsional moments, deflections (both elastic and elastoplastic) and stress distributions to be predicted and understood.

Pole-vaulting =

Pole vaulting as an athletic activity dates back to the ancient Greeks. Modern competition started around the turn of the 20th century, when the Olympic Games were restarted. A sharp increase in the achievable height coincided with the advent of composite (fibreglass) poles, about 50 years ago. These are sufficiently strong and flexible to allow substantial amounts of energy (kinetic energy of the athlete) to be transformed into elastic strain energy stored in the deformed pole, and subsequently transformed again into potential energy (height of the athlete) as the pole recovers elastically. The mechanics of beam bending is clearly integral to this operation.

![](images/pole_vault_graph.jpg) ![](images/pole_vault.jpg)

The sharp increase in achievable height that coincided with the switch to composite poles was due to a change in the mechanics of pole vaulting. Bamboo or metal poles with sufficient flexibility to allow significant energy storage would, respectively, be likely to fracture or deform plastically. Visual inspection of a bent pole (see photo) is all that's needed to estimate the distribution of axial strains (and hence stresses) within its cross-section. The pole has a diameter of about 50 mm and it can be seen in the photo that it is being bent to a (uniform) radius of curvature, *R*, of the order of 1 m (~ the length of the athlete's legs!). Considering a section of unit length (unstrained) in the diagram below, the angle *θ* (~ tan *θ*) ≈ *1/R* after bending (where *R* is the radius of curvature).
From the two similar triangles in the diagram, *θ* is also given by the surface strain *ε* divided by *r*, the radius of the pole. The surface strain, *ε*, is thus given by the ratio *r* / *R*, which has a value here of about 2.5 %. This strain is compressive on the "inside" surface of the pole (coloured blue) and tensile on the "outside" surface (coloured red). The stresses induced by such bending can be high. The axial stress is given by the product of the Young's modulus, E, and the strain, *ε*:

*σ = E ε = E r / R*

For example, assuming the composite to have an axial stiffness of ~ 40 GPa, the axial stresses at the inside and outside surfaces of the pole must be about 2.5 % of this, i.e. ~ ±1 GPa. Composites are able to sustain such high stresses, although it's not unknown for vaulting poles to fracture.

![](images/pole_vault_diagram.jpg)

Strains induced during bending of a pole by the application of a bending moment M.

Bending moments and beam curvatures =

***Bending moments*** are produced by transverse loads applied to beams. The simplest case is the cantilever ***beam***, widely encountered in balconies, aircraft wings, diving boards etc. The bending moment acting on a section of the beam, due to an applied transverse force, is given by the product of the applied force and its distance from that section. It thus has units of N m. It is balanced by the ***internal moment*** arising from the stresses generated. This is given by a summation of all of the internal moments acting on individual elements within the section. These are given by the force acting on the element (stress times area of element) multiplied by its distance from the neutral axis, *y*.

![](images/bend_diagram.jpg)

Balancing the external and internal moments during the bending of a cantilever beam

Therefore, the bending moment, *M*, in a loaded beam can be written in the form

\[M = \int {y(\sigma dA)} \]

The concept of the ***curvature*** of a beam, κ, is central to the understanding of beam bending. The figure below, which refers now to a solid beam, rather than the hollow pole shown in the previous section, shows that the axial strain, *ε*, is given by the ratio *y* / *R*. Equivalently, *1/R* (the "curvature", κ) is equal to the through-thickness gradient of axial strain. It follows that the axial stress at a distance *y* from the neutral axis of the beam is given by

*σ = E κ y*

![](images/bend_diagram2.jpg)

Relation between the radius of curvature, R, beam curvature, κ, and the strains within a beam subjected to a bending moment.

The bending moment can thus be expressed as

\[M = \int {y(E\kappa ydA)} = \kappa E\int {{y^2}} dA\]

This can be presented more compactly by defining *I* (the ***second moment of area***, or "***moment of inertia***") as

\[I = \int\limits\_A {{y^2}{\rm{d}}A} \]

The units of *I* are m4. The value of *I* is dependent solely on the beam sectional shape. The moment can now be written as

*M = κ E I*

These equations allow the curvature distribution along the length of a beam (i.e. its shape), and the stress distribution within it, to be calculated for any given set of applied forces.
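To make these relations concrete, the minimal sketch below evaluates *I* for a solid rectangular section and applies *M = κEI* and *σ = Eκy*; the section dimensions are hypothetical, while the stiffness and curvature echo the pole-vault example above.

```python
# Minimal numerical sketch of M = kappa*E*I and sigma = E*kappa*y for a
# solid rectangular section. I = b*h^3/12 is a standard result (not
# derived in the text).
E = 40e9            # Pa, axial stiffness (composite pole value, as above)
b, h = 0.05, 0.05   # m, breadth and depth of the section (hypothetical)
R = 1.0             # m, radius of curvature (as in the pole example)

I = b * h**3 / 12   # m^4, second moment of area of a solid rectangle
kappa = 1.0 / R     # 1/m, curvature

M = kappa * E * I                 # N m, bending moment
sigma_max = E * kappa * (h / 2)   # Pa, axial stress at the outer fibre
print(f"I = {I:.3e} m^4, M = {M:.0f} N m")
print(f"surface stress = {sigma_max/1e9:.1f} GPa")  # ~1 GPa, cf. the pole
```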
The following simulation implements these equations for a user-controlled beam shape and set of forces. The 3-point bending and 4-point bending loading configurations in this simulation are SYMMETRICAL, with the upward forces, denoted by arrows, outside of the downward force(s), denoted by hooks.

![](images/bend_diagram3.jpg)

A fruitful approach to designing beams which are both light and stiff is to make them hollow. Calculation of the second moment of area for hollow beams is very straightforward, since it is obtained by simply subtracting the *I* of the missing section from that of the overall section. For example, that for a cylindrical tube is given by

\[I = {I\_{{\rm{complete\;section}}}} - {I\_{{\rm{missing\;section}}}} = \frac{{\pi {D^4}}}{{64}} - \frac{{\pi {d^4}}}{{64}}\]

Maximising the beam stiffness =

The product *EI* is termed the "**beam stiffness**", or sometimes the "flexural rigidity". It is often given the symbol Σ. It is a measure of how strongly the beam resists deflection under bending moments. It is analogous to the Young's modulus in uniaxial loading (with the curvature being analogous to the uniaxial strain and the bending moment being analogous to the uniaxial stress). For a given material, the beam stiffness is maximised by maximising the value of *I*. This is done by using sectional shapes for which most of the sectional area is remote from the neutral axis. For example, a beam of square cross-section is stiffer than a circular beam with the same area, since a circle has a larger proportion of the section near the neutral axis. A hollow square section is even stiffer. Taking this rationale still further leads to I-section beams and sandwich panels.

![](images/stiff1.gif)

I-beams are commonly used in the construction of buildings. Sandwich panels are also in extensive use, for example in surf-boards, aircraft, skis etc.

Beam deflections from applied bending moments =

As illustrated in the diagram below, the beam curvature, κ, is approximately equal to the second derivative of the line of the neutral axis (dotted line in diagram):

\[\kappa = \frac{{{d^2}y}}{{d{x^2}}}\]

![](images/moment_diagram1.jpg)

The approximation involved in equating beam curvature to the curvature of the neutral axis of a beam.

It follows that

\[M = \kappa EI = EI\frac{{{{\rm{d }}^2}y}}{{{\rm{d }}{x^2}}}\]

Since the moment at the section concerned can also be written, for a cantilever beam, as *M = F (L - x)*, it follows that

\[EI\frac{{{{\rm{d }}^2}y}}{{{\rm{d }}{x^2}}} = F\left( {L - x} \right)\]

This second order differential equation can be integrated (twice), with appropriate boundary conditions, to find the deflection of the beam at different points along its length. For a cantilever beam, this operation is shown below.

\[\begin{array}{l} EI\frac{{{\rm{d }}y}}{{{\rm{d }}x}} = FLx - \frac{{F{x^2}}}{2} + {C\_1}\\ {\rm{at }}\;x = 0, \frac{{{\rm{d }}y}}{{{\rm{d }}x}} = 0, {\rm{thus\; }}{C\_1} = 0 \end{array}\]

\[\begin{array}{l} EIy = \frac{{FL{x^2}}}{2} - \frac{{F{x^3}}}{6} + {C\_2}\\ {\rm{at }}\;x = 0, y = 0, {\rm{thus\; }}{C\_2} = 0 \end{array}\]

which can be rearranged to give

\[y = \frac{{F{x^2}}}{{6EI}}(3L - x)\]

For example, at the loaded end (*x* = *L*), this gives

\[\delta = \frac{{F{L^3}}}{{3EI}}\]

Twisting moments (torques) and torsional stiffness =

Torsion is the twisting of a beam under the action of a torque (twisting moment).
It is routinely applied to screws, nuts, axles, drive shafts etc., and is also generated more randomly under service conditions in car bodies, boat hulls, aircraft fuselages, bridges, springs and many other structures and components. A torque, *T*, has the same units (N m) as a bending moment, *M*. Both are the product of a force and a distance. In the case of a torque, the force is tangential and the distance is the radial distance between this tangent and the axis of rotation.

![](images/twist_diagram.jpg)

Torsion of a cylindrical bar

Torsion of a cylindrical bar is illustrated in the figure. It can be seen that the shear strain in an element of the bar is given by

\[\gamma = \frac{{r\;{\rm{d}}\theta }}{{{\rm{d}}L}}\]

This equation applies both at the surface of the bar, as shown, and also for any other radial location, using the appropriate value of *r*. Clearly, the shear strain varies linearly with *r*, from zero at the centre of the bar to a peak value at the free surface. The shear stress, τ, at any radial location, is related to the shear strain by

\[\tau = G\gamma \]

where *G* is the shear modulus. It follows that

\[\tau = Gr\frac{{{\rm{d}}\theta }}{{{\rm{d}}L}}\]

The torque, *T*, can therefore be written as

\[T = \int\limits\_A {{\rm{d}}T = } \int\limits\_A {\tau \;r\;{\rm{d}}A} = \int\limits\_A {G\;{r^2}\frac{{{\rm{d}}\theta }}{{{\rm{d}}L}}{\rm{d}}A} \]

As for the beam bending case, the geometrical integral is represented as a (polar) second moment of area

\[{I\_{\rm{P}}} = \int\limits\_A {{r^2}{\rm{d}}A} \]

For a solid cylinder of diameter *w*, this can be written as

\[{I\_{\rm{P}}} = \int\limits\_A {{r^2}} {\rm{d}}A = \int\limits\_0^{w/2} {{r^2}2} \pi r\;{\rm{d}}r = \pi \left[ {\frac{{{r^4}}}{2}} \right]\_0^{w/2} = \frac{{\pi {w^4}}}{{32}}\]

The torque is thus given by

\[T = G\;{I\_{\rm{p}}}\frac{{{\rm{d}}\theta }}{{{\rm{d}}L}}\]

Comparing this equation with the corresponding one for beam bending,

*M = E I κ*

it can be seen that the torsional analogue for the *curvature* of a bent beam is the ***rate of twist*** along the length of the bar. This can be measured experimentally, although not quite so easily as a curvature (because the macroscopic shape of the bar does not actually change - at least when it is straight - see the next section for an important example of a case when it is NOT straight).

Springs =

![](images/coil.jpg)

A collection of assorted springs

An interesting example of torsion is provided by the deformation that takes place during the loading of ***springs*** (torsional coils). Of course, these have a wide range of engineering applications. They are normally made of (high yield stress) metals. (Ceramics are too brittle, while polymers are insufficiently stiff; fibre composites are also unsuitable - see below.) When a spring is loaded (compressed or extended), the deformation experienced by the wire is one of pure torsion. This is illustrated in the diagram below.

![](images/coil2.jpg)

Illustration of how the application of an axial load, F, to a spring generates torsional deformation of the wire and hence axial extension of the spring.

The torque acting on the wire is given by

\[T = F\left( {\frac{D}{2}} \right)\]

in which *F* is the axial force and *D* is the coil diameter.
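As a rough numerical illustration before continuing the spring analysis, the sketch below applies *T = F(D/2)* together with the torsion relations above to the wire of a spring; all of the values used are hypothetical.

```python
# Rough numerical illustration (all values assumed): torque on the wire
# of a loaded spring, T = F*D/2, and the resulting rate of twist of the
# wire, dtheta/dL = T/(G*I_p), with I_p = pi*w^4/32 for a solid wire.
import math

F = 100.0    # N, axial load on the spring (hypothetical)
D = 0.03     # m, coil diameter (hypothetical)
w = 0.004    # m, wire diameter (hypothetical)
G = 80e9     # Pa, shear modulus (typical of steel; assumed)

T = F * D / 2                           # N m, torque acting on the wire
I_p = math.pi * w**4 / 32               # m^4, polar second moment of area
twist_rate = T / (G * I_p)              # rad per metre of wire
tau_surface = G * (w / 2) * twist_rate  # Pa, peak shear stress at r = w/2
print(f"T = {T:.2f} N m, rate of twist = {twist_rate:.2f} rad/m")
print(f"surface shear stress = {tau_surface/1e6:.0f} MPa")
```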
Springs = ![](images/coil.jpg) A collection of assorted springs. An interesting example of torsion is provided by the deformation that takes place during the loading of ***springs*** (torsional coils). Of course, these have a wide range of engineering applications. They are normally made of (high yield stress) metals. (Ceramics are too brittle, while polymers are insufficiently stiff: fibre composites are also unsuitable - see below.) When a spring is loaded (compressed or extended), the deformation experienced by the wire is one of pure torsion. This is illustrated in the diagram below. ![](images/coil2.jpg) Illustration of how the application of an axial load, F, to a spring generates torsional deformation of the wire and hence axial extension of the spring. The torque acting on the wire is given by \[T = F\left( {\frac{D}{2}} \right)\] in which *F* is the axial force and *D* is the coil diameter. It can be shown that the shear stress within the wire (at a distance *r* from its central axis) is given by \[\tau = \frac{{Tr}}{{2I}}\] in which *I* is the bending second moment of area (NOT the polar moment), and the shear strain in the wire is related to the axial extension of one turn of the coil, *s*, by the expression \[\gamma = \frac{{2sr}}{{\pi {D^2}}}\] Measurement of the extension (per turn) of a spring, as a function of the applied force (first carried out systematically by ***Robert Hooke***, in his pioneering work on the nature of elasticity) is a very convenient method of obtaining elastic constants. The ratio of τ to γ, obtained from the above equations, gives the shear modulus, *G*. The loading geometry is such that a large axial extension (per turn) is generated, while the strains within the material remain low, particularly for springs with a large ratio of *D* to *w*. Of course, this is exactly why springs are of practical use - they accommodate large deflections or displacements without the material being strained beyond its elastic limit (which is small for all materials except rubbers). It's interesting to note why springs are not normally made of fibre composites. The natural orientation for the fibres would be along the length of the rod (wire) to be formed into a coil. However, these fibres would have very little effect on the shear modulus in a transverse section of the rod, which is the property that controls the elastic extension of the spring. Such a spring might as well be made solely of the polymeric matrix (although it would then have a very low stiffness). It's only by winding fibres in the hoop direction of the rod that the shear stiffness of transverse sections would be boosted. However, this is impractical, at least for anything but very large scale springs, since it would require the fibres to adopt higher curvatures than would normally be possible.
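Taking the ratio of the two expressions above, and substituting *T* = *FD*/2 and *I* = π*w*⁴/64 for a wire of diameter *w*, gives *G* = 8*FD*³/(*sw*⁴), so a single force-extension measurement on a spring yields the shear modulus. A short sketch of the arithmetic (Python; the "measured" values are assumed for illustration):

```python
def shear_modulus_from_spring(F, s, D, w):
    """
    G from one turn of a coil spring:
    tau = T r / (2 I), gamma = 2 s r / (pi D^2), T = F D / 2, I = pi w^4 / 64
    => G = tau / gamma = 8 F D^3 / (s w^4)
    """
    return 8 * F * D**3 / (s * w**4)

# Assumed measurement: a 2 N load extends each turn of the coil by 0.2 mm
F = 2.0        # axial force (N)
s = 2e-4       # extension per turn (m)
D = 0.010      # coil diameter (m)
w = 0.001      # wire diameter (m)

print(f"G = {shear_modulus_from_spring(F, s, D, w)/1e9:.1f} GPa")  # ~80 GPa, steel-like
```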
Plastic deformation during beam bending = ![](images/plastic1.jpg) If the stresses within a beam exceed the elastic limit, then plastic deformation will occur. This can dramatically change the behaviour. Consider a material exhibiting elastic - perfectly plastic behaviour (i.e. no work-hardening), as shown below. ![](images/plastic_graph.jpg) Stress-strain curve for an elastic-perfectly plastic material. Stress and strain distributions before and after applying the moment are shown below. In the outer regions of the beam, the stress will be capped at σY, although the strain will continue to increase linearly with distance from the neutral axis, as in the elastic case. The curvature (strain gradient), *κ*, induced by a given moment, *M*, will now be greater, since this increase will be required in order to bring the internal moment back up to the level of the applied moment - i.e. bending will increase. ![](images/plastic_diagram2.jpg) Distributions of stress and strain within a beam before and after application of a moment sufficiently large to cause plastic deformation A further difference is observed on removal of the applied moment, since the beam will now retain a ***residual curvature***, *κ*res, as a result of the plastic deformation. This is due to the presence of ***residual stresses***. The residual curvature can be calculated, using the fact that the beam is subject to no applied force. It follows that the residual stress distribution must satisfy a ***force balance***, so that \[\int\limits\_{y = 0}^{{y\_{\rm{s}}}} {\sigma \left( y \right)} \;{\rm{d}}y = 0\] which is equivalent to the shaded areas in the diagram being equal. Since the change in stress (at any value of *y*) on removing the applied moment is given by the change in strain at that depth times the modulus (e.g. *E* Δ*ε* at *y* = *y*s - see diagram), these equations allow the residual stress distribution to be established. The following expressions can be obtained for the thickness of the elastic core, the residual curvature, the surface residual stress and the residual stress at the limit of the elastic core. \[{y\_{\rm{e}}} = \frac{{{\sigma \_{\rm{Y}}}}}{{E\kappa }}\] \[{\kappa \_{{\rm{res}}}} = \kappa {\left( {1 - \frac{{{y\_{\rm{e}}}}}{{{y\_{\rm{s}}}}}} \right)^2}\] \[{\sigma \_{{\rm{s, res}}}} = {\sigma \_{\rm{Y}}} - E\;{y\_{\rm{s}}}\left( {\kappa - {\kappa \_{{\rm{res}}}}} \right)\] \[{\sigma \_{{\rm{e, res}}}} = {\sigma \_{\rm{Y}}} - E\;{y\_{\rm{e}}}\left( {\kappa - {\kappa \_{{\rm{res}}}}} \right)\] Of course, the picture may in practice be complicated by work hardening, more complex sectional geometries, non-prismatic beams etc, but the same principles still apply. Incidentally, it may be noted that, in addition to the force balance, the residual stress distribution in an unloaded beam must also satisfy a ***moment balance***, so that \[\int\limits\_{y = 0}^{{y\_{\rm{s}}}} {\sigma \left( y \right)} \;y\;{\rm{d}}y = 0\] However, the symmetry of the tensile and compressive sides of the beam ensures that this condition is satisfied, so it is not involved in the solution procedure in this case. In other cases, however, in which the neutral axis is not a plane of symmetry, this condition may also need to be invoked in order to find the solution. The plastic deformation behaviour of a prismatic beam, with a symmetrical, rectangular section, made of a metal exhibiting no work hardening, can be explored using the plastic version of the beam bending simulation presented in an earlier section.
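These four expressions are straightforward to evaluate. The sketch below (Python, with assumed inputs loosely representative of a mild-steel beam bent well beyond first yield) illustrates the kind of residual state they predict:

```python
# Residual curvature and stresses after elastic-perfectly-plastic bending.
# Illustrative (assumed) inputs: a mild-steel-like beam bent well past yield.
sigma_Y = 300e6   # yield stress (Pa)
E       = 200e9   # Young's modulus (Pa)
y_s     = 5e-3    # surface position, i.e. half-height of the section (m)
kappa   = 1.0     # curvature under the applied moment (1/m)

y_e       = sigma_Y / (E * kappa)                    # limit of the elastic core
kappa_res = kappa * (1 - y_e / y_s) ** 2             # residual curvature
sig_s_res = sigma_Y - E * y_s * (kappa - kappa_res)  # residual stress at the surface
sig_e_res = sigma_Y - E * y_e * (kappa - kappa_res)  # residual stress at the core limit

print(f"elastic core half-thickness: {y_e*1e3:.2f} mm")
print(f"residual curvature:          {kappa_res:.3f} 1/m")
print(f"surface residual stress:     {sig_s_res/1e6:.0f} MPa (negative = compressive)")
print(f"core-limit residual stress:  {sig_e_res/1e6:.0f} MPa")
```

For these numbers the tensile side of the beam ends up in residual compression at its surface, which is consistent with the force balance illustrated above.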
Summary = You should now understand the basic principles of bending and torsion. You should be able to predict how a beam will respond elastically to a bending moment, from a knowledge of the Young's modulus, *E*, and the sectional geometry of the beam (from which the second moment of area, *I*, is derived). You should understand the relationship between the (local) bending moment, *M*, the beam stiffness (flexural rigidity), *EI*, and the resultant (local) curvature, *κ*. The concept of torsion has been introduced, with the analogue of the bending moment being a torque, *T*, and the analogue of the curvature being the rate of twist of the beam, θ / *L*. The elastic constant controlling the behaviour is the shear modulus, *G*, and the sectional geometry analogue of the second moment of area, *I*, is the polar second moment of area, *I*P. You should also have an appreciation of the nature of the stress distribution within an elastically deformed beam and you should understand that, for a metallic beam, it's possible that these stresses could exceed the yield stress, σY, so that plastic deformation could take place. In this case, there is a change in the relationship between the applied moment and the resultant curvature (so that a given increase in moment gives a larger increase in curvature). Furthermore, on removing the applied moment, the beam retains a residual curvature. Analogous phenomena can occur during torsion. These effects can be quantitatively predicted. Some implications of these analyses for the design of components and structures subject to bending moments and torques have been briefly outlined. Questions = ### Quick questions *You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*
1. How do the axial stresses within a vaulting pole vary with distance from the neutral axis?
a. They are zero at the neutral axis, rising to a tensile maximum at the outer surface, and a compressive maximum at the inner surface.
b. They are zero at the inner surface and rise linearly to a maximum at the outer surface.
c. They reach a maximum at the neutral axis, falling to zero at the outer and inner surfaces.
d. They are constant throughout the section.
2. It is important to maximise the beam stiffness when attempting to minimise the deflection of a beam (of given mass). Which of the following shapes, all with dimensions such that they have the same cross-sectional area, will have the highest beam stiffness?
a. An I-beam
b. A square section beam
c. A hollow cylindrical beam
d. A circular section beam
3. In which of the following situations is torsion occurring?
a. A tree bending in the wind.
b. A wire hanging under its own weight.
c. A screwdriver being used to tighten a screw.
d. The wings on an aircraft acting as cantilever beams during flight.
4. How can the stress distribution in an elastoplastic beam undergoing bending be predicted?
a. The axial strain will vary linearly from the neutral axis to the free surfaces, and so the stress distribution should increase linearly in the same fashion.
b. The axial strain will vary linearly from the neutral axis to the free surfaces, and the stress distribution can be found from this information and the stress-strain curve of the material.
c. The stress distribution cannot be calculated theoretically and must be found by experiment.
d. The beam will be fully plastic and so the stress will be of constant magnitude throughout the section. This stress can be predicted from the Young's modulus and the yield strain of the material.
### Deeper questions *The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*
5. Which of the following sectional shapes will give the highest beam stiffness?
a. Hollow tube of outer diameter 15 cm and inner diameter 14 cm
b. I-beam with a central section 11 cm high by 2 cm wide and flanges 2 cm high by 10 cm wide (giving a total height of 15 cm)
6. A solid rectangular section beam of length *L* = 100 cm, height *h* = 5 cm and width *w* = 1 cm, is loaded under symmetrical 4-point bending, with 1000 N downward forces applied at 40 cm in from both ends of the bar, which is supported at both ends. A deflection of 5 mm is measured at the centre of the beam. Using these data, calculate the Young's modulus of the beam. From your answer, suggest a likely material for the beam.
7. Calculate the shear modulus, *G*, of a material supplied in the form of a hollow tube (length 100 cm, outer diameter 5 cm, wall thickness 0.1 cm), given that, when it is subjected to an applied torque of 1000 N m, an angular twist of 0.10 radians is generated.
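One way of setting out the arithmetic for questions 6 and 7 is sketched below (Python). Note that the central-deflection formula used for symmetrical 4-point bending, δ = *Fa*(3*L*² - 4*a*²)/(24*EI*), is a standard textbook result rather than one derived in this TLP.

```python
import math

# Question 6: symmetrical 4-point bending, loads F at distance a from each support.
# Assumes the standard central-deflection result delta = F a (3 L^2 - 4 a^2) / (24 E I).
F, a, L = 1000.0, 0.40, 1.00          # load (N), load position (m), span (m)
w, h, delta = 0.01, 0.05, 5e-3        # width, height, measured deflection (m)
I = w * h**3 / 12                     # rectangular section, bending about the width
E = F * a * (3 * L**2 - 4 * a**2) / (24 * I * delta)
print(f"Q6: E = {E/1e9:.0f} GPa")     # ~75 GPa, suggesting an aluminium alloy

# Question 7: torsion of a hollow tube, G = T L / (I_P * theta).
T, L_t, theta = 1000.0, 1.00, 0.10    # torque (N m), length (m), twist (rad)
Do, Di = 0.05, 0.048                  # outer / inner diameters (m)
I_P = math.pi * (Do**4 - Di**4) / 32
G = T * L_t / (I_P * theta)
print(f"Q7: G = {G/1e9:.0f} GPa")
```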
Going further = ### Books * S.P. Timoshenko, J.N. Goodier, *Theory of Elasticity*, McGraw-Hill International Editions, Third Edition, 1970. * J.M. Gere, *Mechanics of Materials*, Nelson Thornes, Fifth SI Edition, 2001. ### Websites * Simulation showing the stress distribution in 3-point bending * Video of the 3-point bending of a brass beam * Stress distribution in a polycarbonate bar undergoing 3-point bending * 3-point bending simulation for a sandwich beam * Simulation showing the stress distribution in 4-point bending * Stress distribution in a polycarbonate bar undergoing 4-point bending * Simulation of shear, moment and deflection on application of different forces to a beam * Sandwich panels * The torsional pendulum
Aims On completion of this TLP you should: * Understand how nucleation and crystallization occur. * Be aware of the reasons why crystallization in the cells of living systems is usually fatal. * Appreciate the importance of sugars as solutes in the cellular liquid. * Understand the ways in which some living systems avoid crystal formation. Before you start Before beginning this TLP, you should: * Understand basic thermodynamic concepts, including free energy. * Be familiar with phase diagrams - take a look at the TLP on phase diagrams. * Be familiar with the formation and structure of glass. Introduction Many plants and animals live in extreme climate zones, such as the polar regions or subtropical deserts. One of the major problems associated with living in such regions is that most biological systems are largely composed of water. Water is the universal solvent for biological processes, and most living organisms contain a large amount — humans, for example, are 50-75% water. In the polar regions temperatures can fall to below –20°C. In deserts, the absence of water can lead to the dehydration of plants and animals. The organisms that live in these regions have become adapted to these conditions. Plants that live in arctic regions must contend with the formation of ice crystals in their cells. This is fatal to living tissue for a variety of reasons, which are discussed in the next section. Plants and animals living in the desert have a similar problem; dehydration can cause salts and sugars to precipitate out of solution inside their cells. In this TLP we will consider how biological adaptations allow cells to survive such conditions by preventing crystallization of ice or salts within them. We will look at the basic theory behind nucleation and crystallization, and the details of the water-sucrose system that illustrate key features of the cellular liquid (the *cytosol*). We will then examine the specific ways in which some plants and animals avoid crystallization, through glass formation, extracellular crystallization (forming the crystals outside the cell) and the use of antifreeze proteins. Nucleation and crystallization Living cells consist essentially of an aqueous solution contained within a cell membrane. Thus the soft tissues of many living systems can be described as structured water. The human body, for instance, is 50-75% water. This high proportion of water means that crystallization in the body occurs in two main ways: * by *dehydration*, when minerals crystallize from a saturated solution. * by *freezing*, when ice crystals are formed. The formation of crystals in living cells is usually fatal. This is either due to a change in the ionic ratios in the cytosol or due to the bursting of the cell. During dehydration, water is removed from the cell, leading to a supersaturated solution. Mineral or sugar crystals can then form, changing the ionic ratios in the cytosol. Ice crystals can form in the cells of both plants and ectothermic (cold-blooded) animals. Since ice is essentially pure H2O, ice formation can increase the concentration of minerals in the remaining cytosol to a toxic level. The increased mineral concentration in the cytosol will cause water to be drawn in from the surrounding cells by osmosis, which can cause the cell to swell and burst. In dehydration, the crystals that are formed can puncture the cell membrane, causing the cells to burst, leading to death. Both dehydration and ice formation involve the nucleation and growth of a new, solid phase from an aqueous solution.
In the case of ice formation, the situation is effectively that of a solid crystallizing from a melt. In the case of the formation of mineral crystals, it is that of precipitation from solution. Nucleation is the formation of a small cluster (or nucleus) of the new phase, and these nuclei arise spontaneously. Nuclei that are smaller than a certain critical size will simply disappear, but a nucleus greater than this size will spontaneously grow and will eventually form a grain. This critical size varies with temperature, and the reasons for this are outlined below, using the example of ice forming in water. Nucleation can occur either homogeneously (nucleation in a uniform phase in which there are no inhomogeneities on which nucleation can preferentially occur) or heterogeneously (in which the new phase nucleates on an inhomogeneity). For the nucleation of ice in pure water, the transformation is a structural change only (there is no change in the chemical composition), and the change in free energy per unit volume on transformation is ΔGv. The interface between the ice and water phases has a free energy γ per unit area. Due to the random motion of the water molecules, nuclei of ice will continually form. Assuming that these nuclei are spherical with radius r, the work done in forming the nucleus is: Work for nucleation = change in free energy of bulk phases + interface energy or $$W = {4 \over 3}\pi {r^3}\Delta {G\_v} + 4\pi {r^2}\gamma $$ Since the interface between the water and ice can be considered to be a defect, it contributes an excess energy to the system, and γ is positive. γ is approximately constant over the relevant range of temperatures. ΔGv varies with temperature (as described below), but if the transformation occurs spontaneously (i.e. if the temperature is below the melting temperature of ice), then ΔGv is negative, and a graph of W against r has the form: ![Graph of W vs r](images/img001.gif) So, if a nucleus is formed which has r > r\*, it will decrease its energy by increasing r, i.e. by growing. Any nucleus with r < r\* will decrease its energy by decreasing r and so disappearing. The critical radius, r\*, occurs when dW/dr = 0, giving: $$r\* = - {{2\gamma } \over {\Delta {G\_v}}}$$ and $$W\* = {{16\pi } \over 3}{{{\gamma ^3}} \over {\Delta G\_v^2}}$$ We define ΔG to be the free energy difference between the solid and liquid phases, ΔG = ΔGice - ΔGwater. Similarly we define the differences in enthalpy ΔH and entropy ΔS. Since ΔG = ΔH - TΔS and, at Tm, the melting point of ice, ΔG = 0, then ΔH = TmΔS. If ΔH and ΔS are independent of temperature, then, at temperature T, ΔG = ΔS(Tm - T) = ΔS ΔT, where ΔT is the *supercooling* (also known as *undercooling*). The critical radius and the work for nucleation therefore decrease with decreasing temperature below Tm, and the rate of nucleation would be expected to rise as the supercooling increases. This effect is limited by the decrease in atomic mobility at lower temperatures, and the actual variation of nucleation frequency with temperature is shown below: ![Graph of temperature vs nucleation frequency](images/img002.gif) However, this analysis assumes homogeneous nucleation, which occurs only rarely. Usually there are heterogeneities, such as mould walls or cell membranes, in the melt onto which nucleation preferentially occurs. These heterogeneities are points with high excess energy and so the energy required to form the interface between the existing phase and the new phase is not so significant. Removing heterogeneities is one effective way of decreasing the temperature at which ice forms, i.e. increasing the difficulty of freezing.
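Putting numbers into these expressions is straightforward. The sketch below (Python) evaluates W(r), r\* and W\* for ice, using typical ice-water values for γ and the latent heat (the same figures quoted in the questions at the end of this TLP) and an assumed supercooling of 10 K:

```python
import math

# Work of forming a spherical ice nucleus, W(r) = (4/3) pi r^3 dG_v + 4 pi r^2 gamma,
# and the resulting critical radius and barrier. Inputs are assumptions: typical
# ice-water figures, evaluated at 10 K of supercooling.
gamma = 0.028                 # ice-water interfacial energy (J/m^2)
dH_v  = -3.34e8               # latent heat of freezing per unit volume (J/m^3)
T_m, dT = 273.0, 10.0         # melting point (K) and supercooling (K)
dG_v  = dH_v * dT / T_m       # driving force per unit volume (negative below T_m)

def W(r):
    """Free-energy change on forming a spherical nucleus of radius r."""
    return (4 / 3) * math.pi * r**3 * dG_v + 4 * math.pi * r**2 * gamma

r_star = -2 * gamma / dG_v                          # from dW/dr = 0
W_star = (16 * math.pi / 3) * gamma**3 / dG_v**2    # barrier height

print(f"r* = {r_star*1e9:.1f} nm, W* = {W_star:.2e} J")
print(f"W(r*) check: {W(r_star):.2e} J")            # should equal W*
```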
The water-sucrose system Key features of the cytosol of cells are represented in the binary water-sucrose system. In practice, of course, cytosol compositions are much more complex. The equilibrium phase diagram for this system is shown below: ![Water-sucrose phase diagram](../../tlplib/biocrystal/images/img003.gif) Due to kinetic factors, the equilibrium states shown in this phase diagram are rarely reached. Nucleation of sugar crystals is difficult, since the complexity of the sucrose molecule and the viscosity of the liquid make it unlikely that a critical cluster of suitably oriented molecules will assemble. At high concentrations of sucrose the system becomes sufficiently viscous to be considered (at lower temperatures) as a glass. The practical phase diagram is: ![Water-sucrose phase diagram](../../tlplib/biocrystal/images/img004.gif) Avoidance of crystallization by glass formation = As previously discussed, a common biological problem is dehydration. Crystal formation upon dehydration can be avoided by forming a glass. Glasses are amorphous, and form by a continuous process (no interface or solidification front is involved). Their structure is comparable to that of a liquid (some short-range order is observed, but no long-range order), but their properties are those of a solid. In response to dehydration, some living systems alter the composition of the cytosol in order to favour glass formation, for example by hydrolysis of starch to sugar. As can be seen on the sucrose-water phase diagram, the higher the sugar content, the higher the temperature at which a glass can be formed. Using this mechanism, complete dehydration can be survived. ### Examples of living systems that exploit the formation of glass in order to preserve life * **Resurrection Plant** Various species fall into the category of "resurrection plant", including the *Rose of Jericho* and *Selaginella lepidophylla*. When dry, they appear brown and lifeless, but after rain they become moist and green. This can be seen in the image below, which shows the *Selaginella lepidophylla* plant in both the moist and dry states. ![Photographs of Selaginella lepidophylla](images/img005.jpg) Image reproduced with permission of Brad Fiero, Pima Community College, Tucson, Arizona * **Flatworms** Under the stress of dehydration, some species of simple flatworm are able to convert starch into sugars, which promotes the formation of glass. * **Seeds** Sucrose is the most abundant sugar within mature seeds, and its ability to form a glass in dry tissues greatly slows the chemical reactions within the seed that could lead to its degradation, and hence contributes to the longevity of seeds. When seeds are planted after storage in the dry glassy state, they take up moisture from the ground and become rehydrated, and are then able to germinate. Glass formation is similarly exploited in dried foods such as pasta, in the storage of drugs such as inhaled insulin, and in organ preservation. In these cases, the glassy state is used to inhibit degradation. Avoidance of crystallization by freeze resistance = Many organisms exist in habitats where the temperature falls below the freezing point of water. As previously explained, the formation of ice crystals in cells is lethal, and many species have therefore evolved to prevent ice crystals forming in cells.
There are two different types of resistance to freezing temperatures: *freeze avoidance* and *freeze toleration*. ### Freeze avoidance During the freezing of water, ice crystals nucleate and grow. For pure water, homogeneous nucleation occurs at ~40 K beneath the thermodynamic freezing point. This is a substantial supercooling, but in most cases freezing occurs well above –40°C due to heterogeneous nucleation. One way to avoid freezing is therefore to discourage heterogeneous nucleation. Some frost-hardened woods achieve this by dispersing the water in cells, and by the lack of nucleation on the cell walls. As a result, the water in their cells does not freeze until around –40°C. Fish, insects and some plants that live in arctic regions have evolved to produce *antifreeze proteins*, which inhibit the growth of ice crystals by adsorption onto the ice surface. Adsorption of these antifreeze proteins prevents crystal growth along the primary growth directions, forcing growth to occur parallel to the secondary axes. This inhibits the formation of stable ice crystals and lowers the kinetic freezing temperature. ### Freeze toleration An alternative to freeze avoidance is to promote the freezing of **extracellular** liquid. This protects the cells in two ways: * By the release of latent heat into the cells, which slows the fall of their temperature. * By drawing water out of the cells and decreasing the temperature at which ice forms inside them. This depression of the liquidus line at higher sucrose concentrations can be seen in the sucrose-water phase diagram. * Ultimately, dehydration of the cells leads to glass formation. Many biological systems promote heterogeneous nucleation by the presence of a variety of Ice Nucleating Agents (INAs). These INAs may be either adaptive or incidental. Adaptive INAs, which are discussed below, are present in order to promote heterogeneous nucleation, whereas incidental INAs (for example, features such as cell walls) promote heterogeneous nucleation only as a normally unwanted side effect. Some organisms have evolved to produce adaptive INAs, which nucleate ice crystals between the cells. These INAs can reduce the nucleation supercooling to as little as 1°C. Examples of these are the giant rosette plant (*Lobelia telekii*), which grows on Mount Kenya, and the northern wood frog (*Rana sylvatica*), which lives in Canadian forests. The northern wood frog's body contains 35-45% ice during the winter months. Adaptive INAs are generally large proteins with molecular weights of up to 30,000 atomic mass units. The amino acids within the proteins are ordered, forming a template for ice. Thus a thin layer of ice can always form on the surface of an INA. However, this will not lead to spontaneous ice growth unless the INA is of a certain critical size. If the INA is assumed to have a circular surface with radius R, then free ice growth will occur only when R is greater than or equal to r\*. The critical radius, r\*, is given by the equation: \[r\* = - {{2\gamma } \over {\Delta {S\_v}\Delta T}}\] where γ is the interfacial energy per unit area, and ΔSvΔT is the free energy of solidification per unit volume. Nucleation will occur when the supercooling ΔT satisfies the condition: \[\Delta T \ge - {{2\gamma } \over {\Delta {S\_v}R}}\] A larger INA (greater R) therefore gives a smaller required supercooling and a higher nucleation temperature.
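The sketch below (Python) puts numbers into these relations, using the ice-water values quoted in question 4 at the end of this TLP, to estimate r\* at a supercooling of 2 K and hence the minimum size of disc-shaped INA that can nucleate ice at –2°C:

```python
# Critical radius for ice nucleation at a small supercooling, r* = -2 gamma / (dS_v dT).
# Ice-water values as quoted in question 4 below; supercooling of 2 K (i.e. -2 C).
gamma = 0.028            # ice-water interfacial energy (J/m^2)
dH_v  = -3.34e8          # latent heat of freezing per unit volume (J/m^3)
T_m   = 273.0            # melting point of ice (K)
dT    = 2.0              # supercooling (K)

dS_v   = dH_v / T_m                  # entropy of freezing per unit volume (J/m^3/K)
r_star = -2 * gamma / (dS_v * dT)    # critical nucleus radius

print(f"r* = {r_star*1e9:.0f} nm")   # ~23 nm
# A disc-shaped ice nucleating agent must therefore have a radius R >= r*
# (a diameter of roughly 46 nm) for free ice growth at this temperature.
```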
Summary = In this TLP, you have learnt how the formation of crystals can occur inside cells by cooling or dehydration, and why this is usually fatal to them. The basic theory behind nucleation and crystallization has been introduced, and the water-sucrose system has been described as an approximation to the composition of cell cytosol. You should appreciate that some plants and animals have adapted in order to avoid crystallization. The three main ways of achieving this are: * through the formation of a glass. * through extracellular crystallization (formation of the crystals outside the cell membrane). * through the use of antifreeze proteins. Questions = ### Deeper questions *The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*
1. Crystallization is usually fatal to cells because (answer yes or no to each):
a. Crystals can puncture cell walls
b. The formation of crystals changes the ionic ratios in the cytosol
c. Crystallization speeds up the cell reactions
d. Ice formation causes water to be drawn into the cell by osmosis, causing the cell to swell and burst
e. Nucleation of the crystals requires large amounts of energy
2. Ice formation in cells can be limited by (answer yes or no to each):
a. The presence of antifreeze proteins
b. The dispersion of water in the cellular structure, which limits heterogeneous nucleation
c. The formation of starch from sugar
d. The introduction of Ice Nucleating Agents (INAs) into the cells
e. The freezing of extracellular water
3. Crystallization of minerals in the cells can be limited by (answer yes or no to each):
a. The presence of antifreeze proteins
b. The hydrolysis of starch to sugar
c. The preferential formation of a glass
d. The increased activity of the Golgi Apparatus
4. In some alpine plants, extracellular ice formation occurs at around –2°C. What is the critical radius for ice nucleation at this temperature? In these plants, the extracellular ice forms on ice nucleating agents. If an ice nucleating agent is a circular disc and is a perfect template for ice, what size must it be for nucleation to occur at –2°C? (For the ice-water interface, the interfacial energy *γ* = 0.028 J m-2; the latent heat of freezing of ice is Δ*H*v = -3.34 x 10^8 J m-3.)
Going further = ### Books * Fletcher, *Chemical Physics of Ice*, CUP, 1970. * Franks F, *Biophysics and Biochemistry at Low Temperatures*, CUP, 1985. * Greenwood G W, Greer A L, Herlach D M & Kelton K F, *Nucleation Control*, Phil. Trans. Royal Society (A 361, no 1804, 15 Mar 2003), pp. 403-633. ### Websites * An online course in cryobiology from the University of Calgary. Particularly useful are: chapter 1, which describes biological cells; chapter 6, on freezing as a crystallisation process; and chapter 12, on animal strategies for surviving in cold climates. * A page describing freeze avoidance in plants. * A page on the Union County College website describing the *Selaginella lepidophylla* resurrection plant. * A section from the online manual for the MATTER project's Materials Science on CD-ROM, giving more in-depth information about nucleation. The page refers to the nucleation of metals, but the same applies for nucleation of ice or minerals in cells. * A page on the Animal Diversity website of the University of Michigan Museum of Zoology describing the freezing capabilities of the Northern Wood Frog.
Aims On completion of this TLP you should: * Appreciate that many materials in living systems do not show Hookean elasticity. * Understand the concept of J-curves and be familiar with examples of biomaterials that show this behaviour, in particular arterial wall. * Understand how aneurysms arise in hardened arteries. * Appreciate that some biomaterials, for example hair and spiders' silk, exhibit hysteresis, and understand how this can be essential to their performance and applications. Before you start Before beginning this TLP, you should understand the elasticity of rubber, including the concept of an entropy spring - take a look at the relevant TLP. You should also be aware of how a tensile test is carried out; again, there is a TLP covering this. Introduction As you will already know, many engineering materials, such as metals, show Hookean elasticity, in which the tensile stress applied to a sample is directly proportional to the resultant strain. Within the range of Hookean elasticity, the stress-strain curves on loading and unloading are identical. Such linear elasticity is the usual assumption in engineering design. However, the elasticity of most materials in living systems is much more complicated. In this TLP, you will learn that some biomaterials exhibit non-linear stress-strain curves. Mammalian skin, for instance, exhibits a *J-shaped stress-strain curve*, as do healthy arterial walls. Materials with J-shaped curves are usually tough, and have other advantages. By contrast, materials with different elastic behaviour, such as S-shaped stress-strain curves, are prone to elastic instabilities such as aneurysms when used for tubes under pressure. This TLP also discusses viscoelasticity. Many biomaterials show time-dependent stress-strain curves. Associated with this, the loading and unloading curves do not superimpose on each other. Although deformation is elastic (i.e. recoverable), energy is absorbed during the deformation. This is particularly important in spider silk: when a fly hits the web its energy should be absorbed by the deformation of the web. If spider silk showed elasticity without energy absorption, then the web instead would act as a trampoline! Hookean elasticity For many materials loaded in uniaxial tension, the tensile stress on the material, *σ*, is directly proportional to the tensile strain, *ε*. ![Diagram of a sample loaded in uniaxial tension](../../tlplib/bioelasticity/images/img001.gif) A sample loaded in uniaxial tension The linear relationship between stress and strain is known as Hooke's Law, σ ∝ ε The constant of proportionality in this equation for simple tension is the Young's modulus of the material, *E*: \[E = \frac{\sigma }{\varepsilon }\] The Young's modulus of a material has values ranging from approx. 0.01 GPa for rubbers to approx. 1000 GPa for diamond. Hooke's Law further states that the stress response of a material is independent of time and that the strain of a material disappears completely on removal of the applied stress (i.e. a Hookean material shows fully recoverable, elastic deformation). This leads to a linear stress-strain curve with a gradient of *E*. Loading and unloading occur along the same curve. ![A stress-strain curve for a Hookean material](../../tlplib/bioelasticity/images/img002.gif) A stress-strain curve for a Hookean material Most materials are Hookean only at small strains (typically less than 1%). Metals show Hookean behaviour, although their fully elastic behaviour is restricted to very small strains (typically <0.2%). In this region, the extension is usually both linear and recoverable.
At larger strains, extension is non-Hookean (i.e. either non-recoverable, or non-linear, or both). Although many materials used in engineering applications show Hookean behaviour, only a few biomaterials approximate to it (wood and bone being the two most common). Many biomaterials exhibit a J-shaped stress-strain curve, but firstly, we shall consider the S-shaped stress-strain curve seen in rubbery materials. S-shaped curves = S-shaped stress-strain curves occur in rubbery materials (lightly cross-linked polymers). Materials with S-shaped stress-strain curves are particularly susceptible to elastic instabilities, which are of interest in analysing phenomena such as aneurysms in arteries. The curves have the form: ![An S-Shaped stress-strain curve](../../tlplib/bioelasticity/images/img003.gif) An S-shaped stress-strain curve The initial part of this curve, where the stiffness decreases with increasing load, can be predicted theoretically by considering the rubber as an entropy spring (shown in detail in the relevant TLP). This treatment assumes that all extension occurs via conformational changes (i.e. that there is no bond stretching) and also assumes that the chain is composed of a series of joined links which are equally likely to lie in any direction (the random walk assumption). Using the random walk assumption, the probability distribution for the end-to-end length of a polymer chain with a certain number of links can be obtained, and the Boltzmann expression (S = k ln W) can be used to determine the change in entropy on extending the chain and moving it to a less probable conformation. The total entropy change can be found by multiplying the result for one chain by the total number of chain segments Nt. If we assume that the extension occurs without significant change in the enthalpy, pressure, volume or temperature, then the change in Gibbs free energy (ΔG) can be related to the entropy change (ΔS) via the equation ΔG ≈ -TΔS. If deformation occurs only in the x direction then the force required for deformation is \[F = \frac{{\partial \Delta G}}{{\partial x}}\] and an expression for the stress required for a given extension (with the rubber in uniaxial tension) can be obtained. If N is the number of chain segments per unit volume and λ is the extension ratio of the rubber, then the nominal stress for a rubber loaded in uniaxial tension is: \[\sigma = kTN\left[ {\lambda - \frac{1}{{{\lambda ^2}}}} \right]\] where k is the Boltzmann constant and T is the temperature. This gives a stress-strain curve of the form: ![The theoretically predicted stress-extension curve for rubbery materials](../../tlplib/bioelasticity/images/img004.gif) The theoretically predicted stress-extension curve for rubbery materials Thus the theory predicts that the stiffness of rubber varies a little (particularly at low extensions), but that it tends to a limiting value at higher extensions. At lower extensions, the theoretical stress-extension curve is fairly similar to the S-shaped curves obtained experimentally. At extensions of around λ = 4, the experimental and theoretical curves diverge, with a much larger stiffness seen experimentally than predicted theoretically. This occurs because at larger extensions the assumptions of the model are no longer valid: the polymer chains are mostly aligned with the applied stress, and so applying higher stress stretches strong intra-molecular bonds.
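The predicted curve is easy to evaluate. In the sketch below (Python), the segment density N is an assumed, typical value for a lightly cross-linked rubber; note how the tangent stiffness falls from 3kTN at λ = 1 towards the limiting value kTN:

```python
# Nominal stress of a rubber in uniaxial tension from the entropy-spring model:
# sigma = k T N (lambda - 1/lambda^2). N below is an assumed, typical value.
k = 1.381e-23      # Boltzmann constant (J/K)
T = 298.0          # temperature (K)
N = 2e26           # chain segments per unit volume (1/m^3), assumed

def nominal_stress(lam):
    """Entropy-spring prediction for the nominal stress at extension ratio lam."""
    return k * T * N * (lam - 1 / lam**2)

for lam in (1.0, 1.5, 2.0, 3.0, 4.0):
    print(f"lambda = {lam:.1f}: sigma = {nominal_stress(lam)/1e6:.2f} MPa")
```

With these values kTN is about 0.8 MPa, which is the right order of magnitude for the stiffness of a typical rubber.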
Aneurysms = Hardened and weakened arteries can show elastic instabilities such as aneurysms. A hardened artery can be modelled as a long cylindrical balloon, with radius r and internal pressure P. Consider the tension, T, in the polymer sheet as a function of the extension ratio, λ. The advantage of using T is that it allows us to ignore changes in the thickness of the polymer sheet, which may be considerable at large λ. The units of T are force/length. ![Diagram of polymer sheet in tension](../../tlplib/bioelasticity/images/img005.gif) For a cylinder of radius r, the pressure is related to the tension in the hoop direction, Thoop, by: \[P = \frac{{{T\_{{\rm{hoop}}}}}}{r}\] An increased hoop tension for a given radius, or a decreased radius for a given hoop tension, gives an increased pressure. A plot of Thoop against λ for the balloon rubber would show a characteristic S-shaped curve, similar (but not identical) in shape to the stress-strain (σ - ε) curve: ![Graph of hoop tension vs extension ratio for balloon rubber](../../tlplib/bioelasticity/images/img006.gif) The gradients of the dashed lines are proportional to the pressure inside the balloon at the values of extension ratio (or radius) at which they cross the curve. This allows us to obtain the curve showing the variation of pressure with extension ratio: ![Graph of pressure vs extension ratio](../../tlplib/bioelasticity/images/img007.gif) Although the tension in the balloon only ever rises with *λ*, it can be seen that this is not the case with pressure. The fact that there are regions of the graph for which \[\frac{{{\rm{d}}P}}{{{\rm{d}}\lambda }} < 0\] allows the occurrence of different radii at equal pressures, which leads to the formation of aneurysms. This behaviour can be observed in modelling balloons, as shown in the next section. Now consider Hookean behaviour: ![Graph of tension vs extension ratio for Hookean material](../../tlplib/bioelasticity/images/img008.gif) This then gives: ![Graph of pressure vs extension ratio](../../tlplib/bioelasticity/images/img009.gif) In this case, there is no region of \[\frac{{{\rm{d}}P}}{{{\rm{d}}\lambda }} < 0\] (although the gradient does tend to zero at large extension ratios). Thus if an artery exhibited Hookean elasticity, it should theoretically be stable against aneurysms, but only just, as the pressure inside the tube is almost independent of its radius. However, since a normal artery exhibits natural variations along its length, Hookean behaviour in the arterial wall would not provide a strong safeguard against aneurysms. In fact, arteries exhibit J-shaped stress-strain curves, as described later. For J-shaped curves, d*P*/dλ is again always greater than zero, but now does not tend to zero at large extension ratios. This provides greater stability against aneurysms.
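The instability can be reproduced with a toy calculation (Python). The S-shaped hoop-tension function below is an assumed, purely illustrative form, not measured balloon data; the essential point is that wherever the tangent slope of T(λ) falls below the secant from the origin, P = Thoop/r falls with increasing λ:

```python
import math

# Toy S-shaped hoop-tension curve T(lambda) for a cylindrical balloon (an assumed,
# qualitative form): steep at first, then flattening, then stiffening again.
def T(lam):
    return 5.0 * (1.0 - math.exp(-3.0 * (lam - 1.0))) + 0.1 * (lam - 1.0) ** 3

r0 = 0.01  # unstretched radius (m), assumed

def P(lam):
    """Internal pressure of the cylinder: P = T_hoop / r, with r = r0 * lambda."""
    return T(lam) / (r0 * lam)

# Scan for sign changes of dP/dlambda: a falling region means two different radii
# can coexist at the same pressure - the aneurysm instability.
prev = None
lam = 1.01
while lam < 6.0:
    dP = (P(lam + 1e-4) - P(lam)) / 1e-4   # forward-difference derivative
    sign = dP > 0
    if prev is not None and sign != prev:
        print(f"dP/dlambda changes sign near lambda = {lam:.2f}")
    prev = sign
    lam += 0.01
```

The two sign changes correspond to the maximum and minimum of the pressure curve sketched above; between them, two different radii can coexist at the same internal pressure.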
Demonstration of aneurysms As mentioned on the previous page, the behaviour of a modelling balloon can be used to provide a simple demonstration of an aneurysm. This is shown in the series of photographs accompanying this page. Initially, before any air is pumped into the balloon, it is completely deflated. As the balloon is slowly inflated, it initially has one stable radius. As air is blown into the balloon, the pressure continues to increase. When the balloon has an internal pressure equal to that at the maximum in the pressure-strain curve, an instability is introduced. The balloon develops an aneurysm, which greatly increases the internal volume and so the pressure goes down. The aneurysm grows to give a shape which is stable. ![Diagram of aneurysm](../../tlplib/bioelasticity/images/img014.gif) For the cylinders of small and large radius, and for the transition between the two, the local conditions of *T* and the curvature of the rubber are such as to correspond to a uniform internal pressure. Although the pressure is constant along the length of the balloon, two cylindrical radii are stable and so aneurysms are formed. ![Graph of pressure vs strain](../../tlplib/bioelasticity/images/img015.gif) As more air is forced into the balloon the pressure within the balloon remains constant and the two radii stay the same size. The extra air is accommodated by the lengthening of the part of the balloon with greater radius. The lengthened aneurysm can also split into two. Aneurysms that form in arteries do not usually increase in length but instead increase spherically in radius until they burst. This is because the pressure-strain curve for hardened arteries does not have a local minimum after the maximum, but instead decreases continuously from the maximum. Once the balloon is completely inflated, it again has one stable radius. Further increases in pressure lead to uniform expansion. The pressure inside the balloon as it inflates can be measured using a manometer. The balloon was fixed to the apparatus shown, and a pump was used to inflate the balloon at a constant rate. ![Photograph of manometer being used to measure pressure inside balloon](../../tlplib/bioelasticity/images/img022s.jpg) The difference in height of the two menisci in the manometer is proportional to the pressure inside the balloon. The accompanying videos (inflation of the balloon on the manometer, and a close-up of the manometer as the balloon expands) give a qualitative idea of how the pressure varies as the balloon is blown up: the pressure drop as the aneurysm forms and the constant pressure as the aneurysm lengthens are particularly obvious. The red lines in the video and on the picture show the initial position of the menisci. J-shaped curves = ### Descriptions of J-Shaped Curves Many biomaterials exhibit the following type of stress-strain curve, which is known as a J-shaped curve: ![A J-shaped stress-strain curve](../../tlplib/bioelasticity/images/img023.gif) A J-shaped stress-strain curve The curve shows that initially, small increases in stress give large extensions; however, at larger extensions the material becomes stiffer, and more difficult to extend. Mammalian skin and flesh are two of the many biomaterials that exhibit a J-shaped curve. If you pinch your earlobe and try to pull it downwards, you will find that it is initially quite easy to stretch, but that at larger extensions it becomes more difficult to extend further. Loading and unloading occur along the same curve, i.e. the loading is completely reversible and elastic. This ensures that all the energy used in extending the system is returned once the load is removed.
It is clearly important that there should not be too much energy absorption in arterial walls. The elastic properties of arterial wall are important not only to protect against aneurysms, but also to smooth out variations in blood pressure and blood flow rate. ### Advantages of J-shaped curves for biomaterials J-shaped stress-strain curves cause biological membranes to be extremely tough, even though the fracture energy for many such materials is not particularly high (around 10 kJ m-2). This toughness arises for the following reasons: * The lower part of the J-shaped curve gives very large extension for low applied stress, so the shear modulus in this region is very low and there is no mechanism whereby the released strain energy on fracture can be transmitted to the fracture zone. * The material gets stiffer as the failure point approaches, ensuring that very large extensions require large stresses, so that extensions which are likely to cause harm occur infrequently. * Since the J-shaped curve is concave, the area under the curve up to a given extension is far lower than that for the equivalent Hookean curve, meaning that the energy released in the fracture of a material with a J-shaped stress-strain curve is far lower than the energy released when an equivalent Hookean material fails. Since the release of energy drives crack propagation, a material that releases less energy on fracture is tougher (see the numerical sketch after this section). * The J-shaped stress-strain curve does not lead to the elastic instabilities, such as aneurysms, which arise with S-shaped curves. Hence J-shaped curves are favoured for arteries. A J-shaped curve can be compared to an S-shaped curve in which the material has been pre-stressed, causing the effective origin of the graph to be further along the curve. This is shown below. Arteries are naturally pre-stressed, as even the minimum blood pressure must be sufficiently positive to raise blood to the highest point of the body. ![Graph of stress vs strain](../../tlplib/bioelasticity/images/img024.gif) ### How Does the J-Shaped Curve Arise? The J-shaped curve can arise at a number of different structural levels. For a J-shaped curve to arise, there must simply be the progressive recruitment of strain-resistant components. This can occur by the progressive alignment of polymer chains with the stress; indeed, the stress-strain curve of a rubber is (at higher extensions) J-shaped. Many biomaterials are pre-stressed rubbery materials (i.e. in their neutral position they are still under tension). These rubbery materials, by virtue of being pre-stressed, show only J-shaped behaviour. A similar effect occurs in materials that contain fibres in a soft matrix. Initially the stress acts only against the soft matrix, but with increasing extension the fibres align in the direction of the stress, and so further pulling works against the stiffer fibres. This effect is used in arterial walls, where the collagen fibres act as the stiffer fibres. ### Examples of Materials With J-Shaped Stress-Strain Curves Textile materials provide some non-biological examples of materials with J-shaped stress-strain curves. Knitted materials and woven fabrics pulled at 45° to the warp and weft have J-shaped curves and are thus quite tough. Knitted fabrics usually fail by unravelling, not by tearing, and when woven fabrics tear it is usually along the warp and weft directions, even though these directions are those in which the force is aligned with the strong threads.
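To make the stored-energy comparison flagged in the "Advantages" list above concrete, the sketch below (Python) integrates a J-shaped and a Hookean stress-strain curve numerically up to a common failure point; both curve shapes are illustrative assumptions, with the J-curve modelled as a simple cubic:

```python
# Strain energy stored up to a common failure point (sigma_f, eps_f) for a Hookean
# and a J-shaped curve. Curve shapes are illustrative assumptions: the J-curve is
# modelled as sigma = sigma_f * (eps/eps_f)^3.
sigma_f, eps_f = 10e6, 1.0   # failure stress (Pa) and failure strain, assumed
n_steps = 10000
d_eps = eps_f / n_steps

hooke_energy = 0.0
j_energy = 0.0
for i in range(n_steps):
    eps = (i + 0.5) * d_eps
    hooke_energy += (sigma_f * eps / eps_f) * d_eps        # linear curve
    j_energy += sigma_f * (eps / eps_f) ** 3 * d_eps       # J-shaped curve

print(f"Hookean energy density: {hooke_energy/1e6:.2f} MJ/m^3")  # ~ sigma_f*eps_f/2
print(f"J-curve energy density: {j_energy/1e6:.2f} MJ/m^3")      # ~ sigma_f*eps_f/4
# Less stored energy is available to drive crack propagation in the J-curve material.
```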
Many biological soft tissues show J-shaped stress-strain curves. Skin and arterial walls have already been cited as examples. Viscid spider silk (the capture thread material) is a further example. At low strains, both collagen and tendon show J-shaped curves; in fact, the toughness of raw meat is due to the presence of collagen fibres within the meat. On cooking, the collagen disintegrates, and the meat becomes tender. Viscoelasticity and hysteresis Some biomaterials show a time-dependent elastic behaviour. Although, in the elastic regime, the strain is recoverable, the stress-strain curve is not the same for loading and unloading. Such materials instead exhibit *viscoelasticity*, involving both elastic and viscous components, which at normal loading and unloading rates leads to *hysteresis*. A typical hysteresis curve is shown below, and the energy absorbed during one loading-unloading cycle is given by the area within the loop. The shape of the loop depends on the rates of loading and unloading (unlike normal time-independent elasticity). ![Graph of stress vs strain showing loading and unloading curves](../../tlplib/bioelasticity/images/img025.gif) ### Hair The effect of hysteresis can clearly be seen in hair. Hair consists of keratin, which is a type of protein. There are two main forms of keratin: α-helices, which are found in hair, and β-sheets. As the names suggest, α-helical keratin (also known as α-keratin) contains keratin molecules arranged in helices and held in place by hydrogen bonds, whereas β-sheet keratin (also known as β-keratin) contains keratin molecules arranged in flat sheets, in which adjacent molecules are antiparallel due to more favourable interactions between side-groups. As hair is stretched, the hydrogen bonds between the α-helices rupture, causing the helices to unravel to form β-sheets. The curve for human hair, and a curve obtained for a horse hair from a violin bow, are shown below. Stress-strain curve of a human hair (*Structural biomaterials*, Vincent) Measured stress-strain curve of a horse hair from a violin bow When the stress on the hair is removed, the helices re-form over time, and since the unravelling of α-helices to form β-sheets is a high-energy-absorbing process, a large area is contained within the hysteresis loop. As a result, this energy is unavailable for fracture, giving a high toughness. This is particularly important for hooves and horns, which are also made up of keratin. The following graph shows experimentally measured stress-strain curves for human hair, dry horse hair (as used in violin bows) and replasticized horse hair (the same type of horse hair but soaked in water). ![Stress-strain curves for samples of hair](../../tlplib/bioelasticity/images/img028.gif) As you can see, the replasticized horse hair requires less force to stretch than the dry horse hair (for the same extension). This is because the presence of the water allows the unravelling and re-forming of the helices to occur more easily. If the human hair had been stretched to a higher strain, its hysteresis curve would look more like this: ![Graph of stress vs strain showing hysteresis curve](../../tlplib/bioelasticity/images/img029.gif)
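The energy absorbed per cycle, i.e. the area inside the loop, is easy to compute numerically. In the sketch below (Python), the loading and unloading curves are assumed, illustrative forms sharing the same end points, broadly as in the loops shown above:

```python
# Energy absorbed in one load-unload cycle = area inside the hysteresis loop.
# Loading and unloading curves below are assumed illustrative forms that share
# the same end points (sigma_max, eps_max).
sigma_max, eps_max = 100e6, 0.5    # peak stress (Pa) and peak strain, assumed
n = 10000
d_eps = eps_max / n

loading_energy = unloading_energy = 0.0
for i in range(n):
    eps = (i + 0.5) * d_eps
    loading_energy += sigma_max * (eps / eps_max) * d_eps         # linear loading
    unloading_energy += sigma_max * (eps / eps_max) ** 3 * d_eps  # unloading returns less

loop_area = loading_energy - unloading_energy
resilience = unloading_energy / loading_energy
print(f"energy absorbed per cycle: {loop_area/1e6:.1f} MJ/m^3")
print(f"coefficient of restitution (resilience): {resilience:.0%}")
```

The same numerical approach, applied to a measured curve such as the one in question 6 below, gives an estimate of the resilience of real hair.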
### Spiders' silk There are two main types of silk: spiders' silk and the silk produced by silk worms. Both types have very good mechanical properties, are durable and readily available, and both form fine threads (typically about 1 μm in diameter) that are biodegradable and biocompatible, making them ideal for use as suture materials. The two silks are proteins and are very similar in composition: they are made up of similar proportions of the same amino acids. However, the sequence of these amino acids in spiders' silk is much less regular than in that from silk worms, impeding the formation of crystallites (*micelles*) and making it much more extensible. Spiders produce silk in several different glands, each of which produces silk for a different purpose (wrapping prey, producing drag lines, forming frame threads and capture threads etc). Slightly above the concentration of protein found in the gland, the silk proteins form a liquid crystalline phase. This liquid crystalline phase is formed as the silk passes through the duct leading to the spinneret, and the silk crystallizes as it passes through the spinneret itself to form an insoluble β-sheet. The higher the draw ratio through the spinneret, the higher the orientation of the fibres. *Frame threads* contain well-aligned molecules, and are highly crystalline, dry and relatively thick. In contrast, *capture threads* contain less-aligned molecules, and are less crystalline, plasticised, relatively thin, and exist as coiled fibres within an 'aqueous glue' layer. The *coefficient of restitution*, or *resilience*, describes the fraction of energy returned elastically and can vary by a very large amount, as seen in the table below.

| Material | Occurrence | Resilience |
| - | - | - |
| Resilin | Found in the wing hinges of insects | 97% |
| Collagen | Found in tendons, ligaments, skin etc. | 93% |
| Elastin | The main elastic protein of vertebrates | 76% |
| Viscid silk | Capture threads | 35% |

Capture threads are made from a material known as *viscid silk* and are used to capture prey. Viscid silk has a very low coefficient of restitution, making it ideal for absorbing the energy of an insect flying into the web (instead of catapulting it away!). Its high strength also helps to ensure that the web is not destroyed upon impact. The hysteresis curve for viscid silk is shown below. ![Graph of stress vs strain showing hysteresis curve](../../tlplib/bioelasticity/images/img031.gif) ### Silk from silkworms ![Photograph of silkworm starting spinning a cocoon](images/silkworm_spinner2sm.jpg) Silkworm starting spinning a cocoon © Michaal Cook Silk is used in nature by silk worms to form their cocoons, and so must be strong and not easily breakable, as breakage would kill the silkworm, preventing it from maturing into a moth and reproducing. Silkworms originate from China, India and Japan, and have been used by humans to make silk since at least 3,000 BC. Although silkworms live for only two months, they manage in this time to eat roughly 30,000 times their initial weight. It is estimated that 2,500 to 3,000 cocoons are needed to make just one yard of silk fabric, so despite silk being an excellent material for making fibres, it is also expensive to produce.
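The energy-absorbing role of viscid silk can be put into numbers. The sketch below (Python) runs through the arithmetic using the figures given in questions 8-10 at the end of this TLP, with the loading curve linearised as stated there; treat it as an order-of-magnitude estimate:

```python
import math

# Energy balance for a fly caught by two viscid-silk capture threads.
# Inputs are the figures used in the questions below; the loading curve is
# approximated as linear with slope E_eff = 0.1 GPa.
m, v = 30e-6, 0.8            # fly mass (kg) and speed (m/s)
d, L = 0.7e-6, 0.05          # thread diameter (m) and length (m)
E_eff = 0.1e9                # slope of the linearised loading curve (Pa)
restitution = 0.35           # fraction of energy returned elastically

KE = 0.5 * m * v**2                      # kinetic energy to be absorbed
A = math.pi * (d / 2) ** 2               # thread cross-section
V = 2 * A * L                            # total volume of the two threads

# Energy per unit volume at strain eps is E_eff*eps^2/2, so KE = V*E_eff*eps^2/2:
eps = math.sqrt(2 * KE / (V * E_eff))
dissipated = (1 - restitution) * KE

# Adiabatic temperature rise if the dissipated energy stays in the threads:
rho, c = 1500.0, 1400.0                  # density (kg/m^3), heat capacity (J/kg/K)
dT = dissipated / (rho * V * c)
print(f"max strain ~ {eps:.1f}, energy dissipated ~ {dissipated*1e6:.1f} microJ, "
      f"adiabatic dT ~ {dT:.0f} K")
```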
Summary = In this TLP the elastic properties of biological materials were studied. You will have learnt that few biological materials exhibit Hookean behaviour, either because they do not have linear stress-strain curves, or because they do not have reversible stress-strain curves. Many biomaterials show J-shaped curves, and the reasons why these arise were discussed in terms of material structure. The S-shaped curve has also been covered, and concepts such as the entropy spring, introduced in another TLP, have been revised. You will have seen that S-shaped stress-strain curves lead to elastic instabilities such as aneurysms, and that this can be demonstrated experimentally using cylindrical balloons. Horse hair, human hair and spider silk are examples of materials which show viscoelasticity, and tensile tests were carried out on human hair and horse hair. You should now be aware of the structure of hair and spider silk and the reasons for their low coefficient of restitution. Questions = ### Quick questions *You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*
1. What type of stress-strain curve does a normal arterial wall show?
a. Hookean
b. S-shaped
c. J-shaped
2. What type of stress-strain curve does natural rubber show?
a. Hookean
b. S-shaped
c. J-shaped
3. What type of stress-strain curve do metals at small extensions show?
a. Hookean
b. S-shaped
c. J-shaped
4. What type of stress-strain curve does bone show?
a. Hookean
b. S-shaped
c. J-shaped
5. What type of stress-strain curve does skin show?
a. Hookean
b. S-shaped
c. J-shaped
### Deeper questions *The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*
6. Estimate the coefficient of restitution (sometimes known as the resilience) of a human hair using the hysteresis curve below: ![Stress-strain graph showing hysteresis curve for human hair](../../tlplib/bioelasticity/images/img032.gif)
7. Why are hairs used in violin bows?
8. A fly of mass 30 mg hits a spider's web at 0.8 m s-1 and is stopped by two capture threads, each of diameter 0.7 micrometres and length 5 cm. Calculate the maximum tensile strain in the threads, assuming that the tensile loading curve of the capture threads can be approximated as a straight line with gradient 0.1 GPa.
9. (continuation from previous question) Given that the coefficient of restitution of a capture thread is 35%, calculate the total energy dissipated in the two threads.
10. (continuation from previous question) Since the threads are very thin, this energy would easily be dissipated as heat to the surrounding air. However, supposing that the process is adiabatic (no heat transfer occurs), estimate the temperature rise that would occur. The density of viscid silk is 1500 kg m-3 and the specific heat capacity is 1400 J K-1 kg-1.
11. (continuation from previous question) Comment on the effects of using cables with a large diameter made of viscid silk, for example to decelerate aircraft landing on carriers.
Going further = ### Books * S Vogel, *Comparative Biomechanics: Life's Physical World*, Princeton University Press, 2003. * JE Gordon, *The Science of Structures and Materials*, Scientific American Books, 1988. * JFV Vincent, *Structural Biomaterials*, Princeton University Press, 1990. ### Websites * A site maintained by Richard Bonser about the mechanical properties of keratin, with particular reference to the mechanical properties of feathers. * An article on the American Museum of Natural History website.
Aims On completion of this TLP you should be able to: * Understand the biological structure of bone * Consider bone as an engineering material, its description as a composite and its mechanical properties * Understand the requirement for hip replacements and the issues involved in selecting implant materials * Select suitable materials for use in hip replacements Before you start Before reading this TLP, you should be familiar with: * Young's modulus (**E**, Pa), Ultimate Tensile Strength (**UTS**, Pa), Fracture Energy (**Gc**, J m-2) * The concept of composite materials * Use of Ashby materials selection maps (see the relevant TLP) Introduction The properties of biomaterials are very impressive, and in many cases can be compared directly to those of man-made materials, such as in the use of wood for engineering applications, and the use of silk for making light, strong rope. This is all the more remarkable if we consider that biomaterials are entirely self-assembling and form at near-ambient temperatures. Biomaterials are ideally suited to the function that they perform, often aided by a complex hierarchical structure. In many cases this leads to high degrees of anisotropy in the structure and the resultant properties. A good example of this is seen in the structure of bone. This TLP aims to show you how bone, specifically a human femur, is adapted to the body's needs and its structural functions. It describes the biology of bone and also considers it as an engineering material. The second part of the TLP looks at hip implants. Over 50,000 hip replacement operations are performed in the UK each year, and so this continues to be a very important area of research and development. This TLP looks at why hip implants are necessary, what the potential problems are with these implants, and considers what properties are necessary for each individual component, in order to arrive at suitable materials for the implant. Structure and composition of bone = Long bones such as the femur contain two distinct morphological types of bone: * Cortical (compact) bone * Cancellous or Trabecular (spongy) bone These are shown in the figure below. ![Image of bone morphology](images/bone_macro.jpg) Diagram of distinct morphological types of bone Cortical bone forms a dense cylinder down the shaft of the bone surrounding the central marrow cavity. While cortical bone accounts for 80% of the mass of bone in the human body, it has a much lower surface area than cancellous bone due to its lower porosity. Cancellous (or trabecular) bone is located at the ends of long bones, accounts for roughly 20% of the total mass of the skeleton, and has an open, honeycomb structure. It has a much lower Young's modulus than cortical bone, and this graded modulus gradually matches the properties of the cortical bone to the cartilage that forms the articulating surface on the femoral head. Composition - Bone itself consists mainly of collagen fibres and an inorganic bone mineral in the form of small crystals. *In vivo* bone (living bone in the body) contains between 10% and 20% water. Of its dry mass, approximately 60-70% is bone mineral. Most of the rest is collagen, but bone also contains a small amount of other substances such as proteins and inorganic salts. Collagen is the main fibrous protein in the body. It has a triple helical structure, and specific points along the collagen fibres serve as nucleation sites for the bone mineral crystals. This is shown in the animation below.
The composition of the mineral component can be approximated as hydroxyapatite (HA), with the chemical formula Ca10(PO4)6(OH)2. However, whereas HA has a Ca:P ratio of 5:3 (1.67), bone mineral itself has Ca:P ratios ranging from 1.37 to 1.87. This is because the composition of bone mineral is much more complex and contains additional ions such as silicon, carbonate and zinc. Cartilage is a collagen-based tissue containing very large protein-polysaccharide molecules that form a gel in which the collagen fibres are entangled. Articular, or hyaline, cartilage forms the bearing surfaces of the movable joints of the body. Mechanically, articular cartilage behaves as a linear viscoelastic solid. It also has a very low coefficient of friction (< 0.01), largely attributed to the presence of synovial fluid that can be squeezed out upon compressive loading. The animation below allows you to explore the microstructure of cortical bone. Stresses Bones such as the femur are subjected to a bending moment, and the stresses (both tensile and compressive) generated by this bending moment account for the structure and distribution of cancellous and cortical bone. In the upper section of the femur, the cancellous bone is composed of two distinct systems of trabeculae. One system follows curved paths from the inner side of the shaft and radiates outwards to the opposite side of the bone, following the lines of maximum compressive stress. The second system forms curved paths from the outer side of the shaft and intersects the first system at right angles. These trabeculae follow the lines of maximum tensile stress, and in general are lighter in structure than those of the compressive system. The thickness of the trabeculae varies with the magnitude of the stresses at any point, and by following the paths of the principal compressive and tensile stresses they carry these stresses economically. The greatest strength is therefore achieved with the minimum of material. The distribution of the compact bone in the shaft is also due to the requirement to resist the bending moment stresses. To resist these stresses, the material should be as far from the neutral axis as possible. A hollow cylinder is the most efficient structure, again achieving the greatest strength with the minimum of material. ![Diagram showing computed lines of constant stress from the analysis of various transverse sections](images/stressInFemurHead_updated.png) Diagram showing computed lines of constant stress from the analysis of various transverse sections Formation and remodelling of bone = Bone formation is an essential process in the development of the human body. It starts during the development of the foetus, and continues throughout childhood and adolescence as the skeleton grows. Bone remodelling meanwhile is a life-long process, consisting of *resorption* (the breaking down of old bone) and *ossification* (formation of new bone), and is key to shaping the skeleton and to the repair of bone fractures. There are three types of cell present in bone that are of particular interest – osteoblasts, osteocytes and osteoclasts, which are respectively responsible for the production, maintenance and resorption of bone. * **Osteoblasts** Mononucleated “bone-forming” cells found near the surface of bones. They are responsible for making *osteoid*, which consists mainly of collagen. The osteoblasts then secrete alkaline phosphatase to create sites for calcium and phosphate deposition, which allows crystals of bone mineral to grow at these sites.
The osteoid becomes mineralised, thus forming bone. * **Osteocytes** These are osteoblasts that are no longer on the surface of the bone, but are instead found in lacunae between the lamellae in bone. Their main role is homeostasis – maintaining the correct oxygen and mineral levels in the bone. * **Osteoclasts** Multinucleated cells responsible for bone resorption. They travel to specific sites on the surface of bone and secrete acid phosphatase, which unfixes the calcium in mineralised bone to break it down. During foetal development there are two mechanisms for creating bone tissue: * Endochondral ossification * Intramembranous ossification Intramembranous ossification occurs in the formation of flat bones such as those in the skull, and will not be covered further here. **Endochondral ossification** This involves bone growth from an underlying cartilage model, and is seen in the formation and growth of long bones such as the femur. The initial step involves the development of a cartilage model, which has the rough shape of the bone being formed. In the middle of the shaft is the primary ossification centre, where osteoblasts lay down osteoid on the shaft to form a bone collar. The osteoid calcifies, and blood vessels grow into cavities within the matrix. Osteoblasts then use the calcified matrix as a support structure to lay down more osteoid and form trabeculae within the bone. Meanwhile osteoclasts break down spongy bone to create the medullary cavity, which contains bone marrow. Initially the bone material is deposited with the collagen fibres in random directions, meaning the strength is much lower than at the final stage, in which the fibres are aligned. The primary structure is called woven bone, because the collagen fibres are woven together randomly. This is converted over time into lamellar bone, which is much stronger due to the aligned fibres. The osteoid deposited by the osteoblasts calcifies to initially produce primitive cancellous bone. At sites where cortical bone is required, further deposition of osteoid occurs to increase the density of the structure. At birth, secondary ossification centres appear at either end of long bones. Between the primary and secondary centres is the epiphyseal plate, made of cartilage, which continues to form new cartilage and be replaced by bone such that the bone increases in length. This continues until a person is in their mid-twenties, when the plate is finally replaced by bone and no further growth occurs. Remodelling of bone - Ossification is also essential in the remodelling of bone. This occurs throughout a person's lifetime, with ossification and resorption (removal of bone tissue) working together to reshape the skeleton during growth, maintain calcium levels in the body, and repair micro-fractures caused by everyday stress. The remodelling of cortical bone follows the same process as shown above, but with a different geometry, in order to form the concentric lamellae seen in osteons. Responsive material - Bone is considered to be a responsive material. The formation and resorption of bone occur continuously: the body responds to stress levels in different areas of bone to ensure the right amount of healthy bone tissue is maintained and the bone can be continually reshaped. A stress of 25–40 MPa is sufficient to maintain the correct levels of bone. If the bone is under-stressed for prolonged periods of time, bone wastage will set in, and the bones will become thinner.
This can be an issue if a patient is bed-ridden for a long time, and is also observed in astronauts after long periods in space. A similar effect occurs during osteoporosis, in which the activity of osteoblasts decreases with age. This results in an imbalance of resorption and formation, causing bones to become thinner and weaker. The opposite effect can be seen when bones are suddenly subjected to higher levels of stress than normal. Studies have been conducted that show an increase in bone mass in new recruits to the army as they begin intensive training. Mechanical properties of bone = Introduction Although an organic material, bone can often be considered in the same way as man-made engineering materials. However, due to the nature of its synthesis, it is likely to show more variation in measured properties than typical engineering materials. Factors include: * Age * Gender * Location in the body * Temperature * Mineral content * Amount of water present * Disease, e.g. osteoporosis These variables can, to an extent, be dependent on each other. For example, the mineral content will vary according to the bone's location in the body, and with the age of the patient. As humans age, their bones typically become less dense and the strength of these bones decreases, meaning they are more susceptible to fracture. Osteoporosis is a disease involving a marked decrease in bone mass, and it is most often found in post-menopausal women. These variables mean that there is a range of measured properties for bone, and so values given in tables will always be an average, with quite a considerable spread possible in the data. In addition, the anisotropic structure of bone means that its mechanical properties must be considered in two orthogonal directions: * Longitudinal, i.e. parallel to osteon alignment. This is the usual direction of loading * Transverse, i.e. at right angles to the long axis of the bone Modulus - Bone can be considered to consist primarily of collagen fibres and an inorganic matrix, and so on a simple level it can be analysed as a fibre composite. Composites are materials that are composed of two or more different components. They are commonly used in engineering and industry, where the combination of the two materials creates a composite with properties that are superior to those of the individual components. The Young's Modulus of aligned fibre composites can be calculated using the Rule of Mixtures and the Inverse Rule of Mixtures for loading parallel and perpendicular to the fibres respectively. RULE OF MIXTURES $$E\_{ax} = fE\_f + (1 - f)E\_m$$ INVERSE RULE OF MIXTURES $$E\_{trans} = \left[ {\frac{f}{{E\_f }} + \frac{{\left( {1 - f} \right)}}{{E\_m }}} \right]^{ - 1}$$ where *E*f = Young's Modulus of the fibres, *E*m = Young's Modulus of the matrix, *E*ax, *E*trans = Young's Modulus of the composite in the axial and transverse directions, and *f* = volume fraction of fibres. For the full derivation of these rules, see the TLP on fibre composites. These formulae predict that the composite will be stiffer in the axial direction than the transverse, so cortical bone will be stiffer in the direction parallel to the osteons (i.e. parallel to the long axis of the bone). The chart below shows calculated values for the Young's Modulus of bone in both the longitudinal and transverse directions, for a range of fibre volume fractions, as well as the actual values.
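Before looking at the chart, it may help to see how these two bounds behave numerically. The following minimal sketch (Python) evaluates both rules using the component moduli quoted in the table later in this section (hydroxyapatite ~80 GPa as the matrix, dry collagen ~6 GPa as the fibres); the collagen volume fractions scanned are illustrative assumptions, not measured values for bone.

```python
# Rule of Mixtures (axial) and Inverse Rule of Mixtures (transverse) for an
# aligned fibre composite. Component moduli are the values quoted in this TLP;
# the volume fractions f are illustrative assumptions.

def rule_of_mixtures(f, e_f, e_m):
    """Axial modulus: fibres and matrix strain equally."""
    return f * e_f + (1 - f) * e_m

def inverse_rule_of_mixtures(f, e_f, e_m):
    """Transverse modulus: fibres and matrix carry equal stress."""
    return 1.0 / (f / e_f + (1 - f) / e_m)

E_FIBRE = 6.0    # GPa, dry collagen (the "fibres" in this simple model)
E_MATRIX = 80.0  # GPa, hydroxyapatite (the "matrix")

for f in (0.3, 0.4, 0.5, 0.6):
    e_ax = rule_of_mixtures(f, E_FIBRE, E_MATRIX)
    e_tr = inverse_rule_of_mixtures(f, E_FIBRE, E_MATRIX)
    print(f"f = {f:.1f}: E_ax = {e_ax:4.1f} GPa, E_trans = {e_tr:4.1f} GPa")
```

For collagen fractions around 0.5-0.6 the transverse estimate lands close to the measured 5-13 GPa range, while the axial estimate is roughly double the measured 11-21 GPa: this is the discrepancy discussed below.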
![Calculated and experimental values of Young's Modulus for cortical bone](images/cortical_updated.png) Calculated and experimental values of Young's Modulus for cortical bone We can see that for the transverse direction, the composite model closely agrees with experimental values. However, in the longitudinal direction the difference is large, indicating that the model does not give an accurate picture of the behaviour of bone. This difference occurs because the composite model of the microstructure of bone is highly simplified: the collagen fibres are not aligned parallel to the axis of the osteons, and the bone mineral exists as discrete crystals rather than forming a continuous matrix. A better approximation would be to model bone as a two-level composite. One level is provided by hydroxyapatite-reinforced collagen in a single osteon, and the second level is obtained by the approximately hexagonal packing of osteons in a matrix of interstitial bone. The actual values for the Young's Modulus of bone, compared to collagen and hydroxyapatite, are shown in the table below. The measured value of Young's Modulus also depends on temperature, decreasing with an increase in temperature, and on the strain rate, increasing in value with an increase in strain rate. | | | | - | - | | **Material** | **Young's Modulus, E (GPa)** | | Collagen (dry) | 6 | | Bone mineral (Hydroxyapatite) | 80 | | Cortical bone, longitudinal | 11-21 | | Cortical bone, transverse | 5-13 | Tensile and Compressive Strength As discussed earlier, bones such as the femur are subjected to bending moments during normal loading. These create both tensile and compressive stresses in different regions of the bone. There is a large variation in measured values of both the tensile and compressive strength of bone. Different bones in the body need to support different forces, so there is a large variation in strength between them. Additionally, age is an important factor, with strength often decreasing as a person gets older. | | | | | - | - | - | |   | **Longitudinal direction** | **Transverse direction** | | **Tensile strength (MPa)** | 60-70 | ~50 | | **Compressive strength (MPa)** | 70-280 | ~50 | Elasticity Bone mineral is a ceramic material and exhibits normal Hookean elastic behaviour, i.e. a linear stress-strain relationship. In contrast, collagen is a polymer that exhibits a J-shaped stress-strain curve (see the TLP on elasticity in biological materials). Typical stress-strain curves for compact bone, tested in tension or compression in the wet condition, are approximately a straight line. Bone generally has a maximum total elongation of only 0.5-3%, and is therefore classified as a brittle rather than a ductile solid. **Fracture Toughness** In contrast to the findings for tensile and compressive strength and modulus, the values of toughness in the transverse direction are generally higher than those in the longitudinal direction. This is due to the presence of cement lines in the microstructure. These are narrow regions around the outermost lamellae in the osteons, and they form the weakest constituent of bone. Crack propagation parallel to the osteons can occur much more easily through these regions, and this significantly decreases the fracture toughness of cortical bone in the longitudinal direction. If a crack propagating perpendicular to an osteon reaches a cement line it will change direction, thus blunting the crack. This is illustrated in the animation below.
As a result, although bone is classified as a brittle material (with the major component being mineral), its toughness is excellent. Bone's fracture energy, Gc, is approximately 1.5 kJ m-2, which is comparable to steel at low temperatures, or to wood measured parallel to the grain. This is much tougher than man-made ceramics, due to the presence of the collagen fibres in bone. The stress-strain curves for loading and unloading are different, so the elasticity is time-dependent, a common feature of fibrous proteins. For a full discussion of this, see the TLP on elasticity in biological materials. Bone replacement Bone replacement materials can be needed for a variety of reasons. They are sometimes required when a section of bone is missing and the gap needs to be filled in, for example following an accident or after the removal of a tumour. There are several options for this type of bone replacement: * **Allografts** involve using material from another patient. However, there are risks of infection and of the implant being rejected, and the strength of the replacement bone may be reduced by sterilisation. * **Autografts** involve using material from the same patient, but from a different site (such as the pelvis). Although this reduces the chances of rejection, there is a limited amount of material available, and two surgical procedures are needed, leading to more pain and a higher risk of infection. * **Synthetic materials** are gradually becoming more popular. Hydroxyapatite can be prepared easily in a laboratory, but since it is a ceramic, it is too brittle to be used on its own for large-scale applications. Composites of hydroxyapatite with degradable polymers can also be used, which resorb over time and allow bone to regrow and fill the space. However, the situation is often not as simple as needing to fill in a gap in an otherwise healthy bone, and other reasons for bone replacement materials being required are often age-related. Arthritis is a condition usually associated with old age in which the cartilage at joints wears away, meaning the bones at joints can rub against each other, causing pain and decreased mobility. Additionally, as people get older, their bones become more brittle, and they can experience extensive loss of bone mass (osteoporosis) that leaves them with an increased risk of a hip fracture. This is particularly true for female patients. In these situations, it is impossible to repair the existing bone, and joint replacements are often required. Hip replacement is one of the most common implant surgeries, and this example will be discussed throughout the remainder of this TLP. An introduction to hip replacements = There are two main parts to a total hip replacement implant. The femoral component fits into the top of the femur and replaces the ball of the ball-and-socket joint. The acetabular cup sits in the pelvis and replaces the socket. This is shown in the diagram and radiograph below. ![Diagram and radiograph of a hip replacement ](images/ImageOfHip.png) Diagram and radiograph of a hip replacement The femoral component is fitted by inserting it into the shaft of the femur, which is prepared by hollowing out a section to fit the stem of the implant. The femoral stem essentially replaces bone in the femur and has a structural role that requires it to match, as closely as possible, the strength and toughness of the natural bone.
The requirements for the femoral head component are that it must have a very particular shape, which fits exactly within the replacement socket, and that its surface should have a low coefficient of friction. The acetabular cup component sits in the hip socket and replaces the worn cartilage. This part must therefore itself have a low wear rate, and must also minimise wear of the femoral head component. It is likely that these three components will be made from different materials, and choosing materials with the correct properties is very important. Selecting implant materials = There are four main criteria to consider when selecting suitable implant materials. 1. **Biological Conditions** The human body is not an easy environment for a material to function in for prolonged periods. The material must be able to operate for many years at a temperature of 37°C in a very moist environment. 2. **Response** Implant materials must be designed to minimise the adverse reactions associated with introducing a foreign material into the body. The immune system will typically attack anything that has originated outside the body, leading to inflammation. Elevated levels of particular metals in the bloodstream can lead to various problems, including cytotoxicity and carcinogenesis. It is therefore crucial to choose materials that will have a minimal negative impact on the body. * Cytotoxicity - Having a toxic effect on cells, caused by increased levels of metal ions in the bloodstream. * Carcinogenesis - The formation of cancerous cells, which can be caused by elevated levels of certain metal ions in the body. 3. **Materials Properties** The materials used in each component of the hip implant must have suitable properties to allow them to replace the natural tissue and continue to perform the same functions. The mechanical requirement for the femoral stem is primarily to support the loads that are applied, and therefore the modulus of the implant material is one of the main criteria. The femoral head and acetabular cup components are required to act as bearing surfaces, and therefore the coefficient of friction and wear rates of these materials will be important. 4. **Cost** As with most areas of materials science, cost is an important contributing factor in the selection of materials. Often, manufacturers must strike a suitable balance between a material's performance and its cost. These issues are discussed in the following two sections, along with information about the materials that are generally chosen for these components. Materials selection of femoral stem component = The femoral stem component replaces a large portion of bone in the femur, and this is therefore the load-bearing part of the implant. To bear this load, it must have a Young's Modulus comparable to that of cortical bone. If the implant is not as stiff as bone, then the remaining bone surrounding the implant will be put under increased stress. If it is stiffer than bone, then a phenomenon known as stress shielding will occur. Stress Shielding As discussed earlier, bones are constantly being reshaped by osteoblasts and osteoclasts, through the continual formation and resorption of bone material. If the implant is much stiffer than the bone, then the implant will bear more of the load. Because the bone is shielded from much of the stress being applied to the femur, the body will respond by increasing osteoclast activity, causing bone resorption.
Due to its higher surface area, cancellous bone is more biologically active, because the cells involved in the formation and destruction of bone are found only on the surface. It is therefore more quickly and more drastically affected by stress shielding, wasting away up to four times as quickly as cortical bone. **Suitable Materials** Although 70 wt% of bone material is a ceramic, hydroxyapatite ceramics are not suitable materials for femoral stem replacements, as they are much too brittle. Polymers are also unsuitable, as they are prone to creep and fatigue. Metals are generally used because they typically have a high Young's Modulus, are tough and ductile (meaning they yield before breaking), and have good fatigue resistance. They do, however, tend to be much stiffer than bone, which can lead to stress shielding. A useful tool for comparing the mechanical properties of different materials is the materials selection map. Full instructions on how to use these are given in the TLP on materials selection. Below are two Materials Selection Maps showing some of the most commonly considered metals for femoral stem replacements. As can be seen from the selection map, steel has a Young's Modulus much higher than that of bone, meaning that stress shielding is a serious issue. Stainless steel was used for the femoral component in the earliest hip replacements, and is still used in some implants today. It is an alloy of iron, chromium and usually nickel and cobalt. It is resistant to corrosion, abundant, and relatively cheap and easy to produce. However, some people have allergies to nickel, which would cause extreme adverse reactions after the implant operation. A cobalt-chromium-molybdenum alloy was later introduced as an alternative, since it had better wear properties than stainless steel. However, it is a harder metal, meaning it is more difficult to machine, and it is much more expensive. Titanium's modulus is only half that of stainless steel, so the remaining bone will suffer less from stress shielding. It has an excellent strength-to-weight ratio and an impressive resistance to corrosion. Titanium is also biologically inactive and resistant to creep deformation, and these advantages over stainless steel make it a very good choice of material for the femoral stem component. It is, however, significantly more expensive than stainless steel, costing up to five times as much per kilogram (although in making a hip implant a smaller mass of titanium would be used than of steel). Rather than using commercial purity (cp) titanium, the alloy Ti-6Al-4V (titanium alloy with 6% aluminium and 4% vanadium by weight) is often used, as it gives increased toughness, as shown, and improved fatigue resistance. This alloy can also be treated during the sintering process in a way that controls its porosity. Porosity is the ratio of the volume of pores in a material to the volume of the whole material, and the value of the Young's Modulus decreases with increasing porosity. As can be seen from the first Materials Selection map, a porosity of 40% gives properties which match those of cortical bone extremely well. Fixation The femoral stem component is inserted into the femur, which has been prepared to fit the component. It is traditionally bound to the bone with a polymer bone cement, polymethylmethacrylate (PMMA). As well as fixing the implant in place, the cement helps distribute the load more evenly between the implant and the bone.
The drawback with this method is that during the curing process (hardening through the cross-linking of polymer chains) a large amount of heat is released, which can cause necrosis (cell death) in the bone around the implant and lengthen recovery time. Rather than using bone cement, an alternative fixation method is to introduce a porous surface layer to the implant, which encourages bonding by allowing bone to grow into the pores. The bone and implant therefore become integrated, meaning that the implant is less prone to loosening. A further modification is to coat the implant with a layer of hydroxyapatite. Since its chemical composition is similar to that of bone mineral, the coating enhances bone growth. Many surgeons now favour these uncemented implants as they give a quicker recovery time, with many patients able to put weight on their hip the day after surgery. Coating with hydroxyapatite does, however, raise some problems of its own. The coating is done at an elevated temperature, and as the implant cools, the alloy and the HA contract at different rates, because hydroxyapatite's thermal expansion coefficient is higher than that of the titanium alloy. This can generate thermal stresses and cause cracking of the surface of the implant. In an attempt to match the thermal expansion coefficients (and so avoid cracking), manganese can be added to the alloy to raise its expansion coefficient. Materials selection of femoral head and acetabular cup components = The choice of design for the femoral head and acetabular cup components can be broken down into two main categories: hard-on-hard and hard-on-soft. Hard-on-hard describes implants in which both components are made from either metal or ceramic, and hard-on-soft describes those in which the femoral head is made of a metal or ceramic and the acetabular cup is a polymer. Hard-on-Hard Implants - Metal-on-metal implants were first developed in the 1960s but have since been greatly improved. Cobalt chrome is a popular choice, although some studies have found that metal-on-metal implants can cause elevated levels of the metal ions in urine and the bloodstream. This indicates that wear produces particles that enter the body, and which may have an adverse effect. This is a particular problem for people with poor kidney function. Another possibility is implants in which both the femoral head and the acetabular components are made from a ceramic, such as alumina or zirconia. The main issue to note with these is that the ball and the cup must be manufactured as a pair. They must exactly fit one another, otherwise chipping will occur and ceramic particles will be present in the joint. Hard-on-Soft Implants - Early implants had a metal femoral head and an acetabular component made from ultra high molecular weight polyethylene (UHMWPE), and this is still one of the most popular styles of implant. UHMWPE has densely packed linear polyethylene chains, which gives increased crystallinity and improved mechanical properties, although it leads to a decrease in ductility and fracture toughness. The main problem with this combination of materials is wear of the acetabular cup, which can lead to the formation of small particles of the polymer, and inflammation. A further operation may also be required at a later date to replace the worn component.
Studies have shown that increasing the cross-linking in the polyethylene significantly reduces wear, leading to more durable acetabular components, thus increasing the lifetime of an implant. Alternatively, ceramics such as alumina or zirconia can be used to manufacture the femoral head. These can be polished to give a very smooth surface and have a much lower wear rate than metal on polyethylene. These improved wear properties are dependent on a small, uniform grain size in the ceramic, so its microstructure must be carefully controlled during the manufacturing process. Summary = * Bone is a biomaterial with a complex hierarchical structure, which gives it some impressive material properties. * It is primarily composed of a bioceramic (similar to hydroxyapatite) and collagen, a fibrous protein. * At a microscopic level, it can be seen that the collagen-bone mineral composite forms concentric lamellar structures known as osteons, which are the main structural element of bone. The osteons are densely packed together in cortical bone and their long axes tend to run parallel to the long axis of the bone. * In common with many biomaterials, bone is anisotropic: its mechanical properties differ depending on the orientation of the sample being tested. * Hip replacements are a very common surgical procedure, particularly among older people, as bones can become more brittle with age. * There are three main parts to a total hip replacement: the femoral stem, femoral head, and acetabular components. * The main challenge with hip replacements is to find a material that has mechanical properties similar to those of bone, is capable of operating for many years in the biological conditions of the human body, and causes minimal adverse host response. * The femoral stem is the main load-bearing component. Increasingly the titanium alloy Ti-6Al-4V with 40% porosity is used, as it combines excellent material properties with the advantage of being, to a large extent, biologically inert. * The main mechanical requirement of the femoral head and acetabular cup is to minimise wear. The femoral head is generally a highly polished metal or ceramic, and the acetabular cup is usually made of UHMWPE - a dense, crystalline polyethylene. You should note that no one material or design has emerged as the definitive hip replacement; each has its own advantages and disadvantages, which must be considered for each individual patient, taking into account their age, general health and lifestyle. Also, as is often the case, cost is a very important factor for biomaterials companies, with superior mechanical properties sometimes being sacrificed for ease of production. Questions = ### Quick questions *You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!* 1. Bone is a composite of a mineral which can be approximated to hydroxyapatite, and which protein? | | | | | - | - | - | | | a | Keratin | | | b | Elastin | | | c | Collagen | | | d | Skelatin | 2. Which component of the hip implant is often made from ultra high molecular weight polyethylene? | | | | | - | - | - | | | a | Acetabular cup | | | b | Femoral head | | | c | Femoral stem | | | d | Bone cement | 3. A titanium alloy is often used for the femoral stem component of the hip implant. Which other metals are present in this alloy?
| | | | | - | - | - | | | a | Aluminium and Vanadium | | | b | Aluminium and Chromium | | | c | Chromium and Molybdenum | | | d | Cobalt and chromium | 4. Fracture of bone occurs most easily: | | | | | - | - | - | | | a | In the transverse direction, perpendicular to the long axes of the osteons | | | b | Along the cement line, parallel to the long axes of the osteons | | | c | Between different lamellae in the same osteon | | | d | Parallel to the collagen fibres | ### Deeper questions *The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.* 5. Which of these statement(s) is/are true? | | | | | | - | - | - | - | | Yes | No | a | Bone is stronger in compression than tension | | Yes | No | b | Cortical bone has a J-shaped stress-strain curve | | Yes | No | c | Tensile and compressive strength are higher in the longitudinal direction - parallel to osteon alignment. | | Yes | No | d | The bone mineral forms a continuous matrix in the bone material. | 6. Which of these statement(s) is/are true? | | | | | | - | - | - | - | | Yes | No | a | Titanium has a Young's Modulus of around half that of stainless steel | | Yes | No | b | Increasing the porosity of the titanium or titanium alloys will increase its Young's Modulus. | | Yes | No | c | Uncemented implants have roughened surfaces to encourage bone growth. | | Yes | No | d | Metal femoral heads have much lower wear rates than ceramic ones when used with a polyethylene acetabular cup. | | Yes | No | e | Metal-on-metal implants can lead to elevated levels of metal ions in the bloodstream. | 7. Hydroxyapatite has a density of 3.16 Mg m-3 and collagen's density is 1.33 Mg m-3. Using the weight percentages for dry bone given earlier, calculate the volume fractions of bone mineral and collagen in cortical bone. Use these volume fractions and the Young's Moduli of these components to calculate a value for the Young's Modulus of dry cortical bone. Explain any difference between this and the value quoted in the TLP. 8. In this diagram of the top of a femur, the thickness of the cortical bone in the main shaft of the femur is marked. This thickness is roughly 1 cm, with the marrow space located in the centre. Using this diagram, make estimates of the maximum stress generated in the femur when (A) standing still and (B) walking down stairs ![human femur](images/humanfemur2.jpg) (S. Blatcher, PhD Thesis, Queen Mary College, London, 1995) Going further = ### Books | | | | | - | - | - | | **Title** | **Author** | **Publisher** | | Biomaterials Science | Ratner et al | Academic Press (1996) | | Introduction to Bioceramics | Hench, Wilson | World Scientific (1993) | | Bone Engineering | Davies | em squared Inc. (2000) |
Aims On completion of this TLP you should: * Understand how to generate the reciprocal lattice from the real space lattice * Know how to construct Brillouin Zones in reciprocal space * Be familiar with the Brillouin Zones for some simple two- and three-dimensional structures Before you start Though this package is largely self-contained, some familiarity with simple crystal lattices is useful. If you want a reminder, look at the teaching and learning packages covering crystal lattices and structures. Introduction Leon Brillouin (1889-1969) first introduced Brillouin Zones in his work on the general properties of periodic structures. The concept and construction of the zones can appear quite abstract, but this is largely a result of their wide range of useful applications. Brillouin zones are polyhedra in reciprocal space in crystalline materials, and are the geometrical equivalent of Wigner-Seitz cells in real space. Physically, Brillouin zone boundaries represent Bragg planes which reflect (diffract) waves having particular wave vectors so that they cause constructive interference. Reciprocal lattice vectors Every periodic structure has two lattices associated with it. The first is the real space lattice, and this describes the periodic structure. The second is the reciprocal lattice, and this determines how the periodic structure interacts with waves. This section outlines how to find the basis vectors for the reciprocal lattice from the basis vectors of the real space lattice. Reciprocal lattice vectors, **K**, are defined by the following condition: $${e^{i{\bf{K}} \cdot {\bf{R}}}} = 1$$ where **R** is a real space lattice vector. Any real lattice vector may be expressed in terms of the lattice basis vectors, **a1, a2, a3**: $${\bf{R}} = {c\_1}{{\bf{a}}\_{\bf{1}}} + {c\_2}{{\bf{a}}\_{\bf{2}}} + {c\_3}{{\bf{a}}\_{\bf{3}}}$$ in which the *c**i* are integers. The condition on the reciprocal lattice vectors may also be expressed as $${\bf{K}} \cdot {\bf{R}} = 2\pi n$$ where *n* is an integer. This condition can be satisfied if **K** is expressed in terms of the reciprocal lattice basis vectors **b*i***, which are defined as $$\eqalign{ & {{\bf{b}}\_{\bf{1}}} = {{2\pi \left( {{{\bf{a}}\_{\bf{2}}} \times {{\bf{a}}\_{\bf{3}}}} \right)} \over {\left| {{{\bf{a}}\_{\bf{1}}} \cdot {{\bf{a}}\_{\bf{2}}} \times {{\bf{a}}\_{\bf{3}}}} \right|}} \cr & {{\bf{b}}\_{\bf{2}}} = {{2\pi \left( {{{\bf{a}}\_{\bf{3}}} \times {{\bf{a}}\_{\bf{1}}}} \right)} \over {\left| {{{\bf{a}}\_{\bf{1}}} \cdot {{\bf{a}}\_{\bf{2}}} \times {{\bf{a}}\_{\bf{3}}}} \right|}} \cr & {{\bf{b}}\_{\bf{3}}} = {{2\pi \left( {{{\bf{a}}\_{\bf{1}}} \times {{\bf{a}}\_{\bf{2}}}} \right)} \over {\left| {{{\bf{a}}\_{\bf{1}}} \cdot {{\bf{a}}\_{\bf{2}}} \times {{\bf{a}}\_{\bf{3}}}} \right|}} \cr} $$ Note that **b2** and **b3** are given by cyclic permutations of the expression for **b1**. From this expression it may be seen that the real lattice basis vectors and the reciprocal lattice basis vectors satisfy the following relation: $${{\bf{b}}\_i} \cdot {{\bf{a}}\_j} = 2\pi {\delta \_{ij}}$$ where \({\delta \_{ij}}\) is the Kronecker delta, which takes the value 1 when *i* is equal to *j*, and 0 otherwise. Any reciprocal lattice vector may then be expressed as a linear sum of these reciprocal basis vectors: $${\bf{K}} = h{{\bf{b}}\_{\bf{1}}} + k{{\bf{b}}\_{\bf{2}}} + l{{\bf{b}}\_{\bf{3}}}$$ in which *h*, *k* and *l* are integers. The set of all **K** vectors defines the reciprocal lattice.
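These definitions are easy to check numerically. The sketch below (Python with NumPy) builds the reciprocal basis vectors from any three real-space basis vectors and verifies the relation bi · aj = 2πδij; the FCC primitive vectors used as input are simply an illustrative example (they reappear in the questions at the end of this TLP).

```python
import numpy as np

def reciprocal_basis(a1, a2, a3):
    """Return b1, b2, b3 as defined above: b1 = 2*pi (a2 x a3) / |a1 . (a2 x a3)|, etc."""
    volume = abs(np.dot(a1, np.cross(a2, a3)))     # real-space cell volume
    b1 = 2 * np.pi * np.cross(a2, a3) / volume
    b2 = 2 * np.pi * np.cross(a3, a1) / volume
    b3 = 2 * np.pi * np.cross(a1, a2) / volume
    return b1, b2, b3

# Example input: FCC primitive lattice vectors for a cube of unit side length.
a1 = np.array([0.0, 0.5, 0.5])
a2 = np.array([0.5, 0.0, 0.5])
a3 = np.array([0.5, 0.5, 0.0])
b = reciprocal_basis(a1, a2, a3)

# Check the defining relation b_i . a_j = 2*pi * delta_ij
for i, bi in enumerate(b):
    for j, aj in enumerate((a1, a2, a3)):
        expected = 2 * np.pi if i == j else 0.0
        assert np.isclose(np.dot(bi, aj), expected)

print(np.round(np.array(b) / (2 * np.pi), 3))  # rows are b_i / 2pi
```

For this FCC input the rows come out as (-1, 1, 1), (1, -1, 1) and (1, 1, -1): BCC-type primitive vectors, anticipating the result that the reciprocal lattice of an FCC lattice is BCC.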
Brillouin Zone construction = The reciprocal lattice basis vectors span a vector space that is commonly referred to as reciprocal space, or often, in the context of quantum mechanics, **k** space. This section covers the construction of Brillouin zones in two dimensions. The first step is to use the real space lattice vectors to find the reciprocal lattice vectors and construct the reciprocal lattice. One of the points in the reciprocal lattice is then designated to be the origin. Brillouin zones are always centred on a reciprocal lattice point, but it is important to keep in mind that there is nothing special about this point, as each reciprocal lattice point is equivalent due to translation symmetry. Draw a line connecting this origin point to one of its nearest neighbours. This line is a reciprocal lattice vector, as it connects two points in the reciprocal lattice. ![Diagram showing construction of a Brillouin Zone](images/img001.jpg) Then draw on the perpendicular bisector of the first line. This perpendicular bisector is a Bragg plane. ![Diagram showing construction of a Brillouin Zone](images/img002.jpg) Add the Bragg Planes corresponding to the other nearest neighbours. ![Diagram showing construction of the first Brillouin Zone](images/img003.jpg) The locus of points in reciprocal space that have no Bragg Planes between them and the origin defines the first Brillouin Zone. It is equivalent to the Wigner-Seitz unit cell of the reciprocal lattice. In the picture below the first Zone is shaded red. ![Diagram showing construction of the first Brillouin Zone](images/img004.jpg) Now draw on the Bragg Planes corresponding to the next nearest neighbours. ![Diagram showing construction of the second Brillouin Zone](images/img005.jpg) The second Brillouin Zone is the region of reciprocal space in which a point has one Bragg Plane between it and the origin. This area is shaded yellow in the picture below. Note that the areas of the first and second Brillouin Zones are the same. ![Diagram showing construction of the second Brillouin Zone](images/img006.jpg) The construction can quite rapidly become complicated as you move beyond the first few zones, and it is important to be systematic so as to avoid missing out important Bragg Planes. Click on the animation below for an interactive illustration that follows this process to show how to construct the first six Brillouin Zones for the 2-D square reciprocal lattice. Use the arrow buttons to navigate forwards and backwards through the different steps. **2-D square lattice** As a further example, click on the animation below for an interactive illustration showing how to construct the first six zones for a 2-D hexagonal reciprocal lattice. **2-D hexagonal lattice** The general case in three dimensions After considering these simple examples, it should hopefully be clear how to construct Brillouin Zones. Whilst in two dimensions this geometric method is easy to apply, in three dimensions the lattice cannot be represented on a piece of paper, and in general it is much harder to picture the shape of the Brillouin Zones beyond the first. This section considers how the relevant Bragg planes for zone construction may be generated in a systematic fashion.
In vector notation, the equation of a plane may be written as $${\bf{(r}} - {\bf{a)}} \cdot {\bf{\hat n}} = 0$$ In this expression, **a** is a vector from the origin to a specified (but arbitrary) point in the plane, **r** is a general point in the plane and \({\bf{\hat n}}\) is the unit vector normal to the plane. ![Diagram showing the vector plane](images/img007.jpg) For Brillouin Zones it is convenient to choose **a** so that it is the perpendicular vector from the origin to the Bragg plane of interest, i.e. a vector of the form $${\bf{a}} = {1 \over 2}(h{{\bf{b}}\_{\bf{1}}} + k{{\bf{b}}\_{\bf{2}}} + l{{\bf{b}}\_{\bf{3}}})$$ The unit normal is then given by \({\bf{\hat n}} = {{\bf{a}} \over {\left| {\bf{a}} \right|}}\). Letting *h*, *k* and *l* be integers (positive or negative) and excluding the point where all three are equal to zero, the relevant Bragg Planes may be generated in a systematic fashion. For finding the first three or four zones it is usually sufficient to have *h*, *k* and *l* range between -3 and +3. Then, given any point in reciprocal space, it may be allocated to a Brillouin zone by determining the number, *N*, of Bragg Planes that lie between that point and the origin. The point is then in the (*N*+1)th Brillouin Zone. Whether a Bragg plane lies between a general point, **r**, and the origin may be determined quite simply by considering the projection of the position vector of the point in the direction of the unit normal to the Bragg plane. The scalar product **a**.(**r** - **a**) is positive if the plane lies between the point and the origin, and negative when it does not. The main complication when extending to Brillouin Zones beyond the first zone is that it is very easy to overlook an important Bragg Plane. This can only really be avoided by being careful and systematic. A useful test to indicate if a mistake has been made is to check that all the zones have the same symmetries as the reciprocal lattice itself. For example, the Brillouin Zones for the 2-D square lattice always have fourfold rotational symmetry about the origin. A further useful check is to confirm that the area (in 2-D) or volume (in 3-D) of each Brillouin zone is the same. Zone folding An important property of the Brillouin Zones is that, because the reciprocal lattice is periodic, there exists for any point outside the first zone a unique reciprocal lattice vector that will translate that point back inside the first zone. Each point in reciprocal space is only unique up to a reciprocal lattice vector. Each Zone contains every single physically distinguishable point, and so they all have the same area (in 2-D) or volume (in 3-D). This is easiest to see by example. The links below are for interactive illustrations which show how the first six zones for the 2-D square and hexagonal lattices can be translated or 'folded' back on top of the first zone. Use the arrow buttons to navigate through the different steps. **2-D square Zone folding** **2-D hexagonal Zone folding** Examples of Brillouin Zones in Three Dimensions = The extension to three dimensions is straightforward using the method outlined in the previous section. However, it is important to remember not to overlook important Bragg Planes.
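The plane-counting rule described above is easy to implement numerically, and doing so is a good way to avoid overlooking planes. Here is a minimal sketch (Python with NumPy) for the simplest case, the 2-D square reciprocal lattice; setting |b1| = |b2| = 1 is an assumption made purely for illustration.

```python
import numpy as np
from itertools import product

def brillouin_zone(point, hk_range=3):
    """Allocate a 2-D point to its Brillouin Zone for a square reciprocal
    lattice with unit-length basis vectors (an illustrative assumption).
    A point with N Bragg planes between it and the origin is in zone N + 1."""
    point = np.asarray(point, dtype=float)
    n_between = 0
    for h, k in product(range(-hk_range, hk_range + 1), repeat=2):
        if h == 0 and k == 0:
            continue                    # exclude the origin itself
        a = 0.5 * np.array([h, k])      # perpendicular from origin to the plane
        if np.dot(a, point - a) > 0:    # positive: plane between point and origin
            n_between += 1
    return n_between + 1

print(brillouin_zone([0.1, 0.1]))  # 1: no planes crossed, first zone
print(brillouin_zone([0.6, 0.0]))  # 2: only the (1, 0) plane crossed
print(brillouin_zone([0.8, 0.3]))  # 3: the (1, 0) and (1, 1) planes crossed
```

The same logic extends directly to three dimensions by looping over (h, k, l) triples and using the reciprocal basis vectors of the lattice of interest.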
There is also a problem of representation, because the structure of the Zones in 3-D can be quite complicated and hard to visualise. It is also important to remember that in 3-D the Zones all have the same volume, and that the volume corresponding to the 3rd Zone is the volume between the outer surface of the 2nd Zone and that of the 3rd Zone. Representations of the Brillouin Zones corresponding to Simple Cubic (SC), Body Centred Cubic (BCC) and Face Centred Cubic (FCC) lattices are linked to below. Use the buttons on the left hand side to select which Zone Surface to view, and use the arrow buttons to rotate the view about the [001] axis. **Simple Cubic** **Body Centred Cubic** **Face Centred Cubic** Summary = This teaching and learning package has introduced Brillouin zones and their construction in two and three dimensions. The relationship between the construction of Brillouin zones and diffraction from Bragg Planes has been shown by a consideration of the Bragg equation. The examples shown in the package itself demonstrate the principles of the construction in two dimensions. In three dimensions the principles of the construction are the same, but the Brillouin Zones are harder to visualise. Rotating the shapes of Brillouin Zones about a cube axis, as in the examples we have given in this package, helps to improve the visualisation in comparison with the simple line diagrams found in textbooks. Questions = ### Quick questions *You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!* 1. If a real space vector has length *L*, what is the magnitude of the corresponding reciprocal lattice vector as defined in this TLP? | | | | | - | - | - | | | a | | | | b | | | | c | | | | d | | 2. Which of the following statements about the Brillouin zones for a particular reciprocal lattice are correct? (select yes for true and no for false for each statement) | | | | | | - | - | - | - | | Yes | No | a | They all have the same area / volume. | | Yes | No | b | They all have the same shape. | | Yes | No | c | They all have the same symmetry. | ### Deeper questions *The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.* 3. Either print out the lattice in the file (pdf), or draw out a rectangular lattice with the ratio between the short and long sides equal to 4:5. Use the method outlined earlier to construct the first three Brillouin zones, being careful not to miss any relevant Bragg planes. 4. Either print out the lattice in the file (pdf), or draw out a parallelogram lattice with the ratio between the short and long sides equal to 4:5 and an angle between the sides of about 75 degrees. Use the method outlined earlier to construct the first three Brillouin zones, being careful not to miss any relevant Bragg planes. 5. Confirm that when expressed in a Cartesian basis, possible lattice vectors of the primitive unit cell (i.e., the unit cell with one lattice point per unit cell) for a FCC lattice of unit side length are ![](images/qn05a.gif), ![](images/qn05b.gif) and ![](images/qn05c.gif). 6. Confirm that when expressed in a Cartesian basis, possible lattice vectors of the primitive unit cell for a BCC lattice of unit side length are ![](images/qn06a.gif), ![](images/qn06b.gif) and ![](images/qn06c.gif). 7. 
Using the definition of the reciprocal lattice basis vectors given in the TLP, show that the reciprocal lattice of a FCC lattice is a BCC lattice, and vice versa. 8. For a conventional BCC unit cell with two lattice points per unit cell of side *L*: 1. How many nearest neighbours does each point have? 2. What are their coordinates, and what distance away are they? 3. How many next nearest neighbours are there? 4. What are their coordinates, and what distance away are they? 9. Look at the picture below of the first Brillouin zone for a real space FCC lattice. Using your answers to questions 5 and 6, try to index the planes labelled (a) and (b). ![](images/qn09.gif) Going further = ### Books There are a number of textbooks which discuss, in varying levels of detail, Brillouin Zones and their construction and application. The classic text on Brillouin Zones is * Wave Propagation in Periodic Structures, L. Brillouin (Dover 1953, reprinted 2003) Other texts are also useful, such as: * Solid State Physics, J.S. Blakemore (Saunders 1974) * Electronic Properties of Materials, R.E. Hummel (Springer-Verlag 1992) * Introduction to Solid State Physics, C. Kittel (Wiley 1996) * Solid State Physics, N.W. Ashcroft and N.D. Mermin (Harcourt Brace 1976) ### Websites * The Wikipedia page on Brillouin zones has a number of useful external links which should help to reinforce and confirm the concepts introduced in this teaching and learning package. Further links can be readily found by typing “Brillouin Zones” into Google and searching the web.
Aims On completion of this TLP you should understand: * that materials break by cracking; * what determines whether a material will crack or not; * what determines whether cracking is catastrophic or more gradual; * the concepts of the *fracture energy*, *strain energy release rate*, *fracture toughness* and *stress intensity factor*. Before you start You need not do it now, but you may want to look at the related TLPs. Introduction The consequences of something breaking can be a mere nuisance, or utterly disastrous, as when the pedal drops off one's bike; but without breaking, biting and crunching, breaking into crisp packets, pulverizing coal, oil drilling and many other processes would be impossible. The most dramatic failures are catastrophic, but failure can sometimes be very gradual, even in the most brittle materials. Here we discuss what determines when a material will break, and whether failure will be catastrophic or more gradual. We show that cracking is controlled by the energy changes that occur - it is **not** the stress **at** the crack tip that is important. The emphasis here is on brittle fracture, and although all of this is relevant to metals, we do not discuss the details of ductile fracture. When do atomic bonds break? = Fracture is the separation of atoms. We normally do this by applying a force to the body. How big a force should we need? We want to know how the energy, U, changes with the distance, r, between two atoms. A commonly used expression (known as the Lennard-Jones potential) is $$U(r) = 4\varepsilon \left[ {\left( {{{{\beta ^{12}}} \over {{r^{12}}}}} \right) - \left( {{{{\beta ^6}} \over {{r^6}}}} \right)} \right]$$ Other potentials exist, but the argument is essentially the same. This contains a short range repulsive term and a long range attractive term. The parameter ε is a measure of the depth of the potential well, and β is the finite distance at which the interparticle potential equals zero. Use the animation to see how the energy changes as the distance between two atoms is varied. The force between a pair of atoms is calculated by taking the derivative of the energy function, giving: $$F = {{{\rm{d}}U{\rm{(r)}}} \over {{\rm{d}}r}} = 24\varepsilon \left[ { - 2\left( {{{{\beta ^{12}}} \over {{r^{13}}}}} \right) + \left( {{{{\beta ^6}} \over {{r^7}}}} \right)} \right]$$ Now see how the force between the atoms changes with distance. What is the force needed to separate the atoms in a crystal? Because we are interested in the material's properties we normally think of failure *stresses*, so we need to know how many bonds carry the load across a unit area of crystal normal to the applied force: say one bond per 6.25 × 10−20 m2 of cross-section. From the animation it is clear that materials should have breaking stresses of the order of 10 GPa, but a piece of glass normally breaks at a stress of 70 MPa. Where have we gone wrong? - What have we just estimated? It is the stress required to break all the bonds on a given plane simultaneously; this plane becomes the fracture surface. So do all the bonds on a given plane break at once? Anybody who has opened a packet of crisps or torn a sheet of paper knows the answer is 'no': the bonds break a few at a time, at the tip of a growing crack. So the analysis above does not describe what actually happens when a material breaks, which is a relief, as the predictions were no good anyway.
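The scale of this estimate is easy to reproduce numerically. The sketch below (Python with NumPy) locates the maximum of dU/dr for the Lennard-Jones potential and converts it to an ideal breaking stress; the values of ε and β are assumed, order-of-magnitude inputs chosen for illustration, not data from this TLP.

```python
import numpy as np

# Lennard-Jones 12-6 potential: U(r) = 4*eps*((beta/r)**12 - (beta/r)**6)
# eps (well depth) and beta (separation where U = 0) are assumed values,
# chosen only to be of a physically sensible order of magnitude.
eps = 1.6e-19   # J (~1 eV)
beta = 3.0e-10  # m

r = np.linspace(0.95 * beta, 3.0 * beta, 200_000)
force = 24 * eps * (-2 * beta**12 / r**13 + beta**6 / r**7)  # F = dU/dr

f_max = force.max()            # largest force the bond can resist
r_max = r[np.argmax(force)]    # separation at which this occurs
area_per_bond = 6.25e-20       # m^2, one bond per this area, as in the text

print(f"F_max = {f_max:.2e} N at r = {r_max / beta:.3f} beta")
print(f"ideal strength = {f_max / area_per_bond / 1e9:.0f} GPa")
```

With these inputs the maximum force occurs near r ≈ 1.24β, and the ideal strength comes out at a few tens of GPa: consistent with the order-of-10-GPa estimate above, and far higher than the 70 MPa at which glass actually breaks. Why do cracks weaken a material? Let's watch a crack grow.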
We've used paper here, because the loads are low, and we've applied a force to the paper sheet by stretching it across a metal strip in bending, as shown in the video. ![Loading situation of a strip of graph paper](images/IMG_1501_s.jpg) Loading situation of a strip of graph paper. Video of paper tearing at a preformed crack. You can see that initially the paper is quite capable of withstanding the stress applied to it by the loading jig. And this is fine even when there is quite a big cut in it. But when the cut is lengthened, the crack grows quite suddenly. Is the effect simply one of decreasing the intact section area, causing an increase in stress on the intact section? Effect of cracks – reducing the intact cross-section Strips of paper 200 mm long and 70 mm wide were cut. Some had notches cut into one side, giving a reduction in the intact cross-section of 5.16 × 10−6 m2. These and strips without notches were stretched until they broke in a tensile test machine. The average stress required to break the strips without a notch was measured to be 19.3 MPa. Where the intact cross-section was reduced by a factor of 1.14, the average breaking stress was measured as 9.4 MPa. So the breaking strength falls by a greater factor than the reduction in cross-sectional area. This is why cracks are so dangerous – because they weaken the material more than you would expect from the reduction in intact cross-section. Why? Inglis and the crack tip stress idea In 1913, Inglis calculated the stresses and strains in an elastic plate containing an elliptical crack with semi-axes b and c, under an applied stress σ (applied vertically in this case). ![Image of an elastic plate containing an elliptical crack](images/crack-dimensions.jpg) He found that the stress at the crack tip, σt, was given by $${\sigma \_{\rm{t}}} = \sigma {\rm{ }}\left( {1 + 2{c \over b}} \right)$$ For a sharp crack, i.e. c >> b, the stress is much greater at the crack tip. So failure could occur by cracking, because it is only at a crack tip that the stress required to separate atoms - that is, to simultaneously break a given number of bonds in a unit area of material - is reached. So far, so good, but σt depends on the SHAPE of the crack, i.e. c / b, and we know that the length is important. By examining crack tip stress fields, we can see that stresses are indeed concentrated at the crack tip. Inglis was correct about this. It's the idea that a critical stress to break a bond is needed that is wrong. Can we calculate the energy changes? In the beginning, we calculated the force, and hence the stress, for failure by considering the energy changes as the atoms moved apart. Why not consider cracks like this? The first person to do this was A.A. Griffith in 1920. He considered a body in tension, but tension is rather complicated, so let's consider another stress state: wedging. Wedging is what you do when you split a piece of wood, or try to peel paper off the wall by getting your fingernail underneath it. It is shown in the animation, where a block of thickness h is driven in under a layer of thickness d. So why does the energy, U, of the body change with crack length? From the animation you can see there are 3 contributions: Pushing the wedge in causes the peeling layer to bend more, increasing the strain energy, UE. Only the layer between the crack tip and the point where the layer touches the wedge is bending.
Using simple beam-bending theory gives UE as $${U\_{\rm{E}}} = {{E{d^3}{h^2}} \over {8{c^3}}}$$ What about the work done by the applied force, UF? We can see that the action of the wedge is the same as that of a force applied at the point where the wedge touches the peeling ligament. As the crack grows, this force moves sideways, i.e. perpendicular to its line of action, so no work is done: UF = 0. Finally, as the crack length changes, the energy of the surfaces, US, changes, giving US = c R, where R is known as the fracture energy, the energy required to create new surfaces. The total energy, U, of the body is just these 3 terms added up. So what does this have to do with cracking? - Again explore the animation above, starting with the default values. What will be the crack length, c\*, where the body has the lowest energy? We can find c\* by differentiating the expression for U with respect to c and setting dU / dc = 0. What will happen if the crack is shorter than this? The energy of the body would increase if c < c\*, so the crack grows until c = c\*, and then stops. This is called stable cracking: it happens where there is a stable equilibrium in the energy change with crack length. But what happens if the initial crack length is longer than the equilibrium value, or if we pull the wedge out? What is predicted? If we pull the wedge out, the crack length should decrease; in other words, the crack should heal. To test this, Obreimoff cleaved crystals of muscovite mica by wedging. On pushing the wedge in, he found that the crack tip remained a constant distance ahead of the wedge, as predicted, and he estimated a value of R for mica close to that measured elsewhere. Then he pulled the wedge out. His first tests, in air, showed no crack healing. Then he tried testing in vacuum, and healing really did occur. Obreimoff thought that this was because, in air, chemical groups attach themselves to the fresh mica surfaces, just as fluff sticks to a toffee in your pocket, so that the surfaces no longer stick. This showed that cracking really is thermodynamic in nature, totally different from the critical stress ideas. This reversibility, crack healing, seems a strange idea at first. But we know surfaces stick together - that is the cause of friction - and there have been many observations of this on metal sheets and on fine probes. And it is what we would expect if we think of a surface in terms of unfulfilled chemical bonds, desperate for other electrons. What about tension? = It is not as easy to estimate UE for tension. Griffith used Inglis' analysis for the overall stress and strain fields in a cracked body under a constant applied load. This gives UE as $${U\_{\rm{E}}} = {{\pi {c^2}{\sigma ^2}} \over E}$$ In other words, there is an **increase** in the elastic strain energy as the crack grows. However, when the crack grows, work is done by the applied force, F, equal in magnitude to twice the change in elastic strain energy. (Think of the work done by the applied force and the elastic energy changes on a uniform bar loaded in tension.) As it is work done **on** the system, its sign is opposite to that of UE, so that $$ - {U\_{\rm{F}}} + {U\_{\rm{E}}} = - {{\pi {c^2}{\sigma ^2}} \over E}$$ where UF is the work done by the applied force and UE is the elastic energy change on cracking. Finally there is the work associated with creating the two new crack faces: US = 2 c R, where R is the fracture energy.
Combining the terms of this energy expression we obtain: $$U = - {{\pi {\sigma ^2}{c^2}} \over E} + 2cR$$ The energy function is plotted in the following animation: How does cracking occur now? The energy terms above vary with c as shown. Although there is an equilibrium, it is unstable in tension, whereas it was stable in wedging. Adding the energies and differentiating gives the equilibrium crack length ce as $${c\_{\rm{e}}} = {{E{\rm{ }}R} \over {\pi {\sigma ^2}}}$$ Since this is an unstable equilibrium, once it is surpassed, fracture will occur – but will it be catastrophic? Another way of expressing the energies What we have been doing so far is to calculate how U varies with c, and then find the equilibrium value of c by differentiating U with respect to c. Looking at the terms, they fall into two types: Those that come from the loading system, UE and UF: let's add these together and call the sum UM * These give the driving force for cracking Those that are associated with the material, US * This gives the resistance to cracking Equilibrium will occur when $${{{\rm{d}}U{\rm{(}}c{\rm{)}}} \over {{\rm{d}}c}} = 0$$ Breaking our energies into the two different types gives $$ - {{{\rm{d}}{U\_{\rm{M}}}} \over {{\rm{d}}c}} = {{{\rm{d}}{U\_{\rm{S}}}} \over {{\rm{d}}c}}$$ From our expression for cracking in tension $${{{\rm{d}}{U\_{\rm{S}}}} \over {{\rm{d}}c}} = 2R$$ and $$ - {{{\rm{d}}{U\_{\rm{M}}}} \over {{\rm{d}}c}} = {{2\pi {\sigma ^2}} \over E}{\rm{ }}c = 2G$$ where G is the mechanical energy released per unit area of crack growth, and so is often called the strain energy release rate, or the crack driving force. The turning point occurs when G = R. This is our condition for cracking: the crack driving force, G, equals the fracture resistance of the material, R. Another way of calculating the energies = Consider a crack growing from C by an amount δc to C′. Before cracking occurs there are stresses acting ahead of the crack and across the plane C − C′. After crack growth these stresses have been relaxed to zero, as the newly created surfaces move from their initial position at u = 0 to form the crack. ![Diagram of a crack growing](images/crackpropagation.gif) If we knew the stresses ahead of the crack and the displacements that relax them to zero, we could estimate the total mechanical work, δUM, when the crack grows by δc. What are the displacements, u? To find these we make a sharp cut in a rubber sheet (something compliant enough to give measurable shape changes without breaking) and stretch it, as seen below. ![](images/DSC_0096_s.jpg) We can plot the opening of the crack, *u*, against the distance from the crack tip, r, where r is taken as positive in the direction ahead of the growing crack; the crack opening therefore occurs at negative values of r. From the picture, u ∝ (−r)½. A full elastic analysis gives $$u = {{4K} \over E}{\left( {{{ - r} \over {2\pi }}} \right)^{1/2}}$$ where *K* is a constant known as the *stress intensity factor* and E is the Young modulus. We can see this well by simply stressing a rubber sheet with a cut slit, the shape of the crack opening being clearly parabolic. A curve with equation y = m(x − a)² + c can be fitted to the tip of the curve. ![equation y=m(x-a)^2+c fitted to curve](images/crack_tip_curve.jpg) What are the stresses ahead of a crack tip?
- If the crack opening varies as the square root of distance in this way, it seems not unreasonable to think that the stresses ahead of the crack (not at the crack tip itself) do too, so $$\sigma = {K \over {{{\left( {2\pi r} \right)}^{1/2}}}}$$ We can reassure ourselves by carrying out a simple check on the energy. Taking the expressions for u and σ, and noting that the elastic energy is proportional to σ × u, we can see that the change in mechanical energy, δUM, is likely to be proportional to K²/E. To check, we need to integrate the work done over the increment of crack growth δc. As expected, this gives: $$G = {{{K^2}} \over E}$$ Now, we said cracking occurred when G = R; an equivalent expression is K = KC, where *K*C is the critical stress intensity factor at which the crack will grow, known as the fracture toughness. Both criteria are based on the idea that the mechanical energy per increment of crack growth must reach some critical value. Both are entirely equivalent and both calculate the energy changes on cracking. Why bother if they are the same? K is useful because stress intensity factors, like stresses, are additive. Irwin showed that any loading state could be broken down into 3 different types of loading that he called modes I, II and III: ![Diagram of 3 different types of loading](images/Irwin.gif) To get the total K, we just sum the contributions from each of the three modes. This is much more difficult to do with energies, where we would have to think about interaction terms. Coping with a scatter in strength = The Griffith expression shows that the stress required for failure in tension depends on the size of the largest flaw, c, according to \(1/\sqrt c \). In very brittle materials the flaw sizes cannot be easily measured, so it is sometimes impossible to calculate a minimum strength. Using a method due to Weibull, we can instead find the chance of survival of a sample as a function of applied stress. We can then extrapolate back to an acceptable probability of failure and find the corresponding stress. The Weibull analysis gives a relationship of $$\ln \ln {1 \over {{S\_n}}} = m\ln \sigma - (m\ln {\sigma \_{\rm{o}}} + \ln N)$$ where Sn is the probability of survival and σ is the fracture stress. We can treat our results from the tensile testing of paper with Weibull statistics: Using Weibull's method To determine the survival probability associated with each stress we start by testing some samples, in this case the graph paper used in the first video of the TLP. The following table shows all the collected data and the calculated values needed:

| *n* | UTS /MPa | *S*n | ln ln (1/*S*n) | ln (σ /Pa) |
| - | - | - | - | - |
| 1 | 21.28 | 0.07 | 0.9704 | 16.8733 |
| 2 | 20.86 | 0.14 | 0.6657 | 16.8531 |
| 3 | 20.20 | 0.21 | 0.4321 | 16.8213 |
| 4 | 20.20 | 0.29 | 0.2254 | 16.8211 |
| 5 | 20.12 | 0.36 | 0.0292 | 16.8173 |
| 6 | 19.30 | 0.43 | -0.1657 | 16.7755 |
| 7 | 19.03 | 0.50 | -0.3665 | 16.7615 |
| 8 | 18.87 | 0.57 | -0.5805 | 16.7531 |
| 9 | 18.53 | 0.64 | -0.8168 | 16.7347 |
| 10 | 18.49 | 0.71 | -1.0892 | 16.7330 |
| 11 | 18.11 | 0.79 | -1.4223 | 16.7122 |
| 12 | 18.10 | 0.86 | -1.8698 | 16.7112 |
| 13 | 17.74 | 0.93 | -2.6022 | 16.6912 |

From Weibull treatment of graph paper test.xls Remember that *S*n is the probability of survival at a given stress, given by $${S\_n} = {n \over {{n\_{\rm{T}}} + 1}}$$ where nT is the total number of samples. The data have been ranked with the highest strength as n = 1, numbered down to the lowest at n = nT.
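The values in this table, and the fitted Weibull parameters, can be reproduced with a short script. The following is a minimal sketch in Python (using NumPy), assuming the strengths are entered in MPa and ranked strongest first; the 99% survival stress computed at the end corresponds to the graphical read-off described below.

```python
import numpy as np

# UTS values (MPa) for the 13 paper strips, ranked strongest first (n = 1)
uts = np.array([21.28, 20.86, 20.20, 20.20, 20.12, 19.30, 19.03,
                18.87, 18.53, 18.49, 18.11, 18.10, 17.74])
n_T = len(uts)
n = np.arange(1, n_T + 1)

S_n = n / (n_T + 1)              # survival probability for each rank
y = np.log(np.log(1.0 / S_n))    # ln ln (1/Sn)
x = np.log(uts * 1e6)            # ln (sigma / Pa)

# The slope of the straight-line fit is the Weibull modulus m
m, intercept = np.polyfit(x, y, 1)
print(f"Weibull modulus m = {m:.1f}")

# Stress for a 99% chance of survival: set Sn = 0.99 and invert the fit
y99 = np.log(np.log(1.0 / 0.99))                  # = -4.60
sigma99_MPa = np.exp((y99 - intercept) / m) / 1e6
print(f"Stress for 99% survival = {sigma99_MPa:.1f} MPa")
```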
When we plot the graph of ln ln (1/Sn) vs ln (UTS), we get the following: ![Graph of Weibull analysis](images/weibull_graph.gif) From Weibull treatment of graph paper test.xls We can then read off the graph for various values of *S*n. For instance, if we wanted a 99% chance of survival, we read off from –4.60 on the y-axis, which corresponds to 15.1 MPa. A fully interactive version of this spreadsheet and graph tool can be downloaded. Simulation of Weibull modulus experiment The simulation below shows an experiment that can be done to see how a material fails in compression, and the scatter in the loads required. In the simulation, the failure crack was shown to break directly across the material; in reality, the material is more likely to break in more of a ‘Y’ shape. This phenomenon is known as compression curl or cantilever curl. There is little quantitative understanding of why this occurs, but it is seen when many materials break. The rod was loaded in bending, placing the top in compression and the bottom in tension. The crack would have started to grow from the side under tension. However, instead of breaking straight through, the crack splits and propagates to two points on the compression side, creating the characteristic ‘Y’-shaped fracture. One explanation is that, as the crack moves through to the compressive side, the stresses cannot be relaxed fast enough (it is a high-speed failure), so the crack can split and take a longer route; this is known as compression curl. The characteristic shape can be useful for identifying the side that was in tension in failed samples. The images below show the results of compression curl on Perspex that was put into uniaxial compression. ![uniaxial compression on perspex](images/perspex1.jpg) ![compression curl for perspex](images/compressionCurl.jpg) Fracture path in Perspex put in uniaxial compression ![after uniaxial compression on perspex](images/perspex2.jpg) Fractured pieces of Perspex after it was put in uniaxial compression When does the sample fail completely? = It is incorrect to say that **failure** must occur when G = R. There will be some cracking, but complete failure (as in tension) also requires that $${{{{\rm{d}}^2}U(c)} \over {{\rm{d}}{c^2}}} < 0$$ i.e. the energy is at a maximum, or $${{{\rm{d}}G} \over {{\rm{d}}c}} > {{{\rm{d}}R} \over {{\rm{d}}c}}$$ In other words, failure will be catastrophic when the rate of increase of the driving force with crack growth is greater than the change in R with crack growth (which we have taken as a constant). Alternatively, cracking will be stable when $${{{{\rm{d}}^2}U(c)} \over {{\rm{d}}{c^2}}} > 0$$ i.e. the energy is at a minimum, or $${{{\rm{d}}G} \over {{\rm{d}}c}} < {{{\rm{d}}R} \over {{\rm{d}}c}}$$ That is, as the crack grows, the resistance to cracking, R, increases faster than the driving force, G. Sub-critical crack growth and R-curves It is commonly observed that cracks can grow stably in a structure over a period of time. Oddly, this phenomenon, known as *fatigue*, tends to occur in tougher materials. This suggests that toughening does not simply increase the magnitude of the fracture energy but changes the way in which a crack grows. Consider a material toughened by crack bridging, in which intact ligaments across the crack faces are left behind as the crack grows. Toughening occurs because separating the crack faces then requires extra work, in order to either stretch the ligament or pull it out of the matrix in which it is embedded.
This type of mechanism occurs in many different materials, including: * rubber-toughened polymers * materials containing either fibres or elongated grains This animation shows how an R-curve is generated when a crack propagates through a material containing ligaments perpendicular to the direction of crack propagation. After each critical point, the simulation will pause. It can then be continued by clicking the 'proceed' button. This type of behaviour is known as R-curve behaviour. We can see that as we load the material containing a small flaw, the flaw will begin to grow (under an increasing applied stress intensity factor) until the process zone is fully developed. The crack in the process zone has a different shape to that outside it because of the forces exerted by the intact ligaments. Once the process zone has developed fully, the whole crack will move forward with the process zone remaining a constant size. As process zones exist in all toughened materials, we might expect that they would all show R-curves, and this is the case, as shown below. ![Graph of the R-curve due to grain interface bridging in a silicon nitride containing elongated grains and an alumina containing silicon carbide fibres](images/r-curvesample.gif) Showing the R-curve due to grain interface bridging in a silicon nitride containing elongated grains and an alumina containing silicon carbide fibres. Summary = It has been shown that energy arguments describe how cracking occurs, but that there is more than one way of working out the energy changes. Questions = ### Quick questions *You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*
1. The Lennard-Jones potential best describes which of the following:
 a. The energy required to break a typical metal
 b. How flaws affect the energy of a lattice
 c. The energy associated with stretching an atomic bond
2. Which of the following situations is required for fast fracture to occur?
 a. The crack tip is sufficiently sharp
 b. dG/dc is greater than dR/dc
 c. The applied stress is near the ultimate tensile strength of the material
 d. The strain energy release rate G is equal to the fracture energy R
3. In the wedging setup, which of the following will \*not\* influence the critical crack length?
 a. The position of the wedge
 b. The stiffness of the material
 c. The thickness of the peeling layer
 d. The surface energy of the material
4. Which one of Irwin's modes of fracture best describes a car tyre skidding on a dry road?
 a. Mode I
 b. Mode II
 c. Mode III
5. What is the critical stress intensity factor equal to when no residual stresses are present?
 a. Fracture toughness
 b. Crack resistance
 c. Ultimate tensile strength
 d. Fracture energy
6. What best describes the situation in wedging?
 a. As the wedge position is varied, it is always energetically favourable for the crack to return to an equilibrium length
 b. The fracture energy dominates at small crack lengths
 c. If the crack becomes very long in comparison to the geometry of the setup, unstable fast fracture may occur
Going further = ### Books The best overall book on cracking is that by Brian Lawn, *Fracture of Brittle Solids*, 1993, Cambridge University Press.
You should also look at *Molecular Adhesion and its Applications* by Kevin Kendall. For the metallurgist, the short monograph by Withers & Knott is exceptionally useful.
Aims On completion of this TLP you should: * Understand the meaning of the Biot number, how it affects the temperature profile in a casting, and the resulting microstructure. * Be able to explain the formation of the microstructure observed in a cast ingot. * Be familiar with some common methods of casting, their advantages and disadvantages, and be able to choose a suitable process for manufacturing a variety of metallic components. Before you start It will be helpful to have an understanding of solute partitioning and the formation of dendrites. The TLP on the solidification of alloys covers these topics. Introduction Casting processes are essential in forming solid ingots or billets of metals for further processing, but a wide variety of casting processes are also used in the production of finished or semi-finished components. Casting allows the production of complex shapes, without the need to introduce weaknesses when joining separate pieces, and is conservative of material compared with machining or cutting a shape from a large piece of metal. The methods available achieve various standards of surface finish, microstructure, start-up cost, unit cost and production volume, in order to suit many applications. Heat transfer = The rate of solidification of a metal during casting is dictated by: * The excess heat in the liquid metal on pouring, * The amount of heat produced by the solidification of the metal (the latent heat of fusion), * The rate at which this heat can be dissipated from the metal. A simple way to predict the way in which a casting will solidify is to use the Biot number, Bi, given by: \[Bi = \frac{h}{K / L}\] where h is the interfacial heat transfer coefficient between the metal and the mould wall, and K/L is the thermal conductance of the casting, calculated from K, the thermal conductivity of the liquid metal, and L, the length of the casting in the direction of the heat flow. When the Biot number is large, heat is transferred quickly from the metal to the mould, but takes longer to reach the mould wall from the centre of the casting, resulting in a significant temperature gradient in the casting and only a small temperature difference across the interface. When the Biot number is small, the transfer of heat to the edge of the casting is faster than the transfer of heat from the metal to the mould, resulting in a large temperature difference across the interface and only a small temperature gradient within the casting. This is illustrated in the movie below: For the situation where Bi << 1, we can consider the cooling and solidification separately. Firstly, the liquid metal cools uniformly from its pouring temperature, *T*p, to its melting temperature, *T*m. We know that: q = hΔT and \[q = \frac{{dT}}{{dt}}{C\_V}L\] so that \[\frac{{{\rm{d}}T}}{{{\rm{d}}t}} = \frac{{h\Delta T}}{{L{C\_V}}}\] where dT / dt is the cooling rate, and CV is the heat capacity of the liquid per unit volume. The time, tC, taken for the casting to reach Tm is given by: \[{t\_C} = \frac{{L{C\_V}}}{h}\ln \left| {\frac{{{T\_W} - {T\_P}}}{{{T\_W} - {T\_M}}}} \right|\] where TW is the temperature of the mould wall. The solidification stage can be understood by equating the heat transferred across the interface with the heat generated by the solidification: *q* = hΔT and q = vΔHF, so that \[v = \frac{{h\Delta T}}{{\Delta {H\_F}}}\] where v is the speed of the solidification front, and ΔHF is the latent heat of fusion of the metal per unit volume.
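These two stages can be evaluated numerically. The following is a minimal sketch in Python, using the property values for aluminium from the table given below; the heat transfer coefficient, casting length, pouring temperature and mould wall temperature are illustrative assumptions, chosen so that the Biot number is well below 1.

```python
import math

# Newtonian cooling and solidification-front speed, using the equations above.
# Aluminium properties are taken from the table below; h, L, T_P and T_W are
# illustrative assumptions (chosen so that Bi = h / (K / L) << 1).
T_M = 933.0     # melting temperature (K)
C_V = 3.1e6     # volumetric heat capacity (J m^-3 K^-1)
dH_F = 1.07e9   # latent heat of fusion (J m^-3)
K = 240.0       # thermal conductivity (W m^-1 K^-1)

h = 1000.0      # interfacial heat transfer coefficient (W m^-2 K^-1), assumed
L = 0.05        # casting length in the heat-flow direction (m), assumed
T_P = 1000.0    # pouring temperature (K), assumed
T_W = 300.0     # mould wall temperature (K), assumed

print(f"Bi = {h / (K / L):.2f}")  # must be << 1 for Newtonian cooling to apply

# Time for the melt to cool from T_P to T_M:
t_C = (L * C_V / h) * math.log((T_P - T_W) / (T_M - T_W))
# Speed of the solidification front while the metal is at T_M:
v = h * (T_M - T_W) / dH_F
print(f"t_C = {t_C:.1f} s, front speed v = {v * 1e6:.0f} um/s")
```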
The time, tS, taken for the metal to solidify once it has reached Tm is given by: \[{t\_S} = \frac{{\Delta {H\_F}L}}{{h({T\_M} - {T\_W})}}\] The total time taken for the casting to solidify once the metal has been poured in is: \[t = \frac{L}{h}\left( {{C\_V}\ln \left| {\frac{{{T\_W} - {T\_P}}}{{{T\_W} - {T\_M}}}} \right| + \frac{{\Delta {H\_F}}}{{\left( {{T\_M} - {T\_W}} \right)}}} \right)\] Heat transfer simulation Here we demonstrate how heat transfer through the mould wall determines the temperature change in a casting during solidification. The simulation assumes ***Newtonian cooling***, where heat transfer is limited by the interface between the metal and the mould. The simulation shows the effect of varying parameters such as the interfacial heat transfer coefficient, h, the casting length, L, and the amount of superheat (determined by the pouring temperature, Tp). It can be carried out for a wide range of metals to study the effect of properties such as melting temperature, Tm, latent heat of fusion per unit volume, ΔHf,V, thermal conductivity, K, and heat capacity per unit volume, Cp,V. The relevant thermal properties of several pure metals are shown below:

| | **Melting temperature** /K | **Latent heat of fusion** /MJ m⁻³ | **Volumetric heat capacity** /kJ m⁻³ K⁻¹ | **Thermal conductivity** /W m⁻¹ K⁻¹ |
| - | - | - | - | - |
| Ag | 1235 | 1100 | 2920 | 422 |
| Au | 1337 | 1200 | 2800 | 272 |
| Al | 933 | 1070 | 3100 | 240 |
| Cu | 1358 | 1842 | 4080 | 395 |
| Mg | 922 | 640 | 2240 | 154 |
| Pb | 601 | 260 | 1590 | 34 |
| Sn | 505 | 440 | 1780 | 63 |
| Zn | 693 | 812 | 3070 | 112 |

In the simulation you can select any of these materials and its properties will be displayed. You may then vary parameters such as the casting length, the interfacial heat transfer coefficient between the solid and the mould wall, and the pouring temperature, in order to see how long it takes for the casting to solidify and cool. There are a couple of provisos. Firstly, because this is a Newtonian cooling simulation, you must ensure that the Biot number is low enough for this assumption to be valid (hence, for the metals which have poor thermal conductivities, you must keep the casting length relatively low). Secondly, of course, you must ensure that the metal is poured above its melting temperature! There are a number of assumptions being made in this simulation which may not hold for a real casting. Microstructure and segregation in castings Microstructure of castings The animation below shows how the microstructure of an ingot develops during solidification. For a closer look at dendrite formation, see the Solidification of Alloys TLP. Below is a picture of a real cast Al ingot. Move your mouse over the various parts to see how they are formed. Segregation in castings - When casting an alloy, segregation occurs, whereby the concentration of solute is not constant throughout the casting. This can be caused by a variety of processes, which can be classified into two types: Microsegregation, which occurs over distances comparable to the dendrite arm spacing. This occurs as a result of the first solid formed having a lower solute concentration than the final equilibrium concentration, resulting in partitioning of the excess solute into the liquid, so that solid formed later has a higher concentration. More about microsegregation can be found in the TLP on the solidification of alloys.
Macrosegregation occurs over distances similar to the size of the casting. This can be caused by a number of complex processes involving shrinkage effects as the casting solidifies, and a variation in the density of the liquid as solute is partitioned. We will not discuss these processes further. It is desirable to prevent segregation during casting, to give a solid billet that has uniform properties throughout. Microsegregation effects can be removed after casting by homogenisation, carried out by annealing at high temperatures, where the diffusivity is higher. Macrosegregation effects occur over larger distances, so cannot be removed in this way, but can be reduced by control of the casting process and mixing during solidification, often by electromagnetic stirrers. Ultrasound is sometimes used to break up dendrites as they grow, reducing the scale of the dendritic structure and the extent of microsegregation. Sand casting Sand castings are formed using compacted sand as the mould; more details of the process can be found in the movie below: Typical components produced by sand casting are automobile engine blocks and ship propellers. **Advantages**: * Low capital investment means that short production runs are viable. * Use of sand cores allows fairly complex shapes to be cast. * Large components can be produced. **Disadvantages**: * The process has a high unit cost, as it is labour intensive and time consuming. * The sand mould leaves a poor surface finish, which often requires further processing. * Cannot make thin sections. ![The drag and cope of a casting flask](images/copedrag.jpg) The drag (left) and cope (right) of a casting flask ![Sand casting](images/sandcasting4.jpg) A sand casting with risers and the runner system still attached ![Crankshaft](images/crankshaft.jpg) A crankshaft for an auto engine which has been produced by sand casting Different types of sand are used in sand casting: * Petro-bond – a mixture of quality sand and oil or synthetic resin. * Green sand – a mixture of sand, clay, water and sometimes other additives, used in the moist ("green") state; it is re-usable. The right amount of water has to be added to prevent porosity. * Sand mixed with water glass (Na2O.nSiO2.mH2O) can be hardened with CO2 gas through the chemical reaction: Na2O.nSiO2.(mn + x)H2O + CO2 = Na2CO3.xH2O + n(SiO2.mH2O) This transforms the sand into a solid mould, which can be used after the cope & drag are removed. * A parting powder, similar to talcum powder, is put on the pattern to make it easier to remove. More complicated castings can be made by producing a pattern out of polystyrene foam, which is left in place when the molten metal is added. When the metal is poured into the mould, the heat vaporizes the foam a short distance away from the metal surface, allowing the metal to fill the mould cavity. Permanent mould casting - The basic process of permanent mould casting is similar to that for sand casting, except that a permanent metallic mould is used. This mould is made of two parts, containing the necessary gates and risers to ensure the correct flow of metal. The metal moulds give a better surface finish than sand casting, and selective cooling rates can give control over the scale and morphology of the microstructure formed. Die casting = Die casting is an automated process that forms castings under high pressure in a metal mould.
More details of the process can be found in the movie below: Typical items produced by die casting are toys and small precision parts such as sprockets and gears. Aluminium, copper and zinc alloys are commonly used in die casting; ferrous alloys are harder to cast in this way, as the moulds are made from hardened steel. There are various methods of die casting: * Gravity fed – similar to permanent mould casting; the material is poured into the top of the mould & forced downwards by gravity. * Pressure – the metal is forced into the mould under pressure. * Cold chamber – molten metal is poured into the system and then forced into the mould. * Hot chamber – the metal is heated within the system & then forced from the crucible into the die. * Squeeze – similar to injection moulding; a given amount of material is forced into the mould. **Advantages:** * Very low unit cost. * High definition & surface finish. * Excellent dimensional accuracy. * Cool metal mould gives fast solidification, leading to a fine grain structure. * Can produce thin sections. **Disadvantages:** * A large capital investment is required to set up a die casting process. * It is difficult to control the microstructure of the solid. * The alloys used must have a low melting point, often at the expense of other properties, such as strength and stiffness. * Cannot be used for complex shapes, as the casting couldn't be ejected from the mould. * Cannot be used for large castings. Continuous casting Continuous casting is used to produce very large quantities of metals in simple shapes, by casting & rolling the metal continuously in one process. The slabs are then generally further processed, by methods such as rolling, extrusion or another casting method, to make more useful items. More about these processing methods can be found in a separate TLP. **Advantages:** * High yield of casting for a given volume of liquid, mainly thanks to the lack of a contraction pipe. * Good surface finish. * Extremely low unit cost, due to the very high volume of metal that can be cast. **Disadvantages:** * A large capital investment is required to set up the process. * Only simple shapes can be cast, which must have a constant cross-section. Investment casting Investment casting is used to make precision parts with a good surface finish. It is used, for example, to make turbine blades from single-crystal Ni-based superalloys. As the moulds are made from ceramic, metals with high melting points can be cast in this way. * A pattern of the required component is formed out of wax, usually through injection moulding in a metal mould. A runner and gate are included within the wax pattern. Pre-formed ceramic cores can also be added, so that hollow castings can be made. * Many wax patterns are then connected together to form a tree, and this is dipped into a ceramic slurry, which sets through drying. Several layers of ceramic are created around the wax patterns, increasing in coarseness. * The wax patterns are then melted out, ready for the molten metal to be poured in to create the castings. * The ceramic mould is broken to reveal the castings, and any ceramic cores added at the start can be chemically etched out to leave a hollow structure. Investment wax | Investment mould | Investment blades
Video of investment casting. **Advantages:** * Metals with a high melting temperature can be cast, due to the ceramic mould * Complex shapes can be formed by using ceramic liners in the original wax patterns * Good surface finish can be obtained using fine ceramic material **Disadvantages:** * Expensive, as the mould cannot be reused * Time consuming (drying times for the ceramic are roughly 24 hrs) Other methods of casting Melt spinning - Melt spinning is the rapid cooling of a metal to form a metallic glass ribbon. A cooling rate of 10⁷ K s⁻¹ can be achieved by casting onto a water-cooled, rotating Cu wheel. Heat extraction is very rapid, and thin sections are formed (~50 μm), so Newtonian cooling is observed due to the small length scale. A slowed-down video of the process is shown below. A slowed-down video of melt spinning. The fast cooling rate means that amorphous metal (metallic glass) is formed. This has very different properties compared to the same alloy with a crystal structure. The metallic glass ribbon is often used in electronic applications: when a magnetic metal (iron, cobalt, nickel) is used within the casting alloy, the amorphous material forms a magnet with low coercivity, due to the lack of regular structure. This means that the material can easily be demagnetised. Centrifugal casting - Centrifugal casting is used to create hollow, cylindrical parts. The mould rotates, and is only partially filled with molten metal, depending on the required thickness. The centrifugal force generated by the rotation forces the metal against the wall, giving a good surface finish. Centrifugal forces can also be used when casting solid objects, the force being used instead of pressure or gravity to push the metal into the mould. **Advantages** * Good external detail * High control of the microstructure * No core required to make a hollow component **Disadvantages** * Poor internal surface quality * Inner cross-section must be circular Summary = There are four main ways of casting metals: sand casting, die casting, continuous casting and investment casting. Each has advantages and disadvantages, and each is suited to producing different components or using different materials. The Biot number predicts how a casting will solidify, and is equal to \(\frac{h}{K / L}\), where: h = heat transfer coefficient, L = length of the casting, K = thermal conductivity. Commonly three zones form as the casting solidifies: * Chill zone – small crystals form on the mould wall. * Columnar zone – larger grains growing in optimum crystallographic orientations. * Equiaxed zone – small grains that solidified early but detached and moved back into the liquid. Questions = ### Quick questions *You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*
1. What is the Biot number equal to?
 a. hL/K
 b. Kh/L
 c. K/Lh
 d. h/KL
2. What is h?
 a. heat transfer coefficient
 b. heat flux
 c. heat capacity
 d. latent heat
 e. thermal conductivity
3. Which is the first solid to form in a casting?
 a. Equiaxed zone
 b. Columnar zone
 c. Chill zone
 d. Pipe
4. Which casting method uses a wax pattern?
 a. Sand casting
 b. Die casting
 c. Investment casting
 d. Continuous casting
5.
Which gas is often used to harden sand casting moulds?
 a. Hydrogen
 b. Carbon monoxide
 c. Oxygen
 d. Carbon dioxide
6. Cold chamber is a type of which casting method?
 a. Sand casting
 b. Die casting
 c. Investment casting
 d. Continuous casting
Going further = ### Books *Manufacturing with Materials* by Lyndon Edwards and Mark Endean, Butterworth-Heinemann, ISBN 0-7506-2754-9. Chapter 2 is an excellent coverage of casting. ### Websites This site contains more information about continuous casting, and a very advanced simulation of the process. Prof. H.K.D.H. Bhadeshia's website.
Aims On completion of this tutorial you should: * Appreciate the meaning of a misfit strain between the stress-free (in-plane) dimensions of two layers * Understand how misfit strains can arise and how they give rise to stresses and strains in the layers, depending on their elastic properties and thicknesses * See how the general case is simplified when one of the layers (a coating) is much thinner than the other (a substrate) and understand the Stoney equation * Be able to predict the curvature that tends to arise in such systems * Be able to predict whether debonding is likely to occur, given the value of the interfacial fracture energy Before you start It may be helpful to study the TLP on beam bending before you start, particularly the page covering Bending Moments and Beam Curvatures. Introduction This TLP covers some basic mechanics of multi-layered systems. It relates primarily to a 2-layer system, such as a coating on a substrate, although it should be fairly clear how the concepts involved can be extended to multi-layer systems. The treatments presented are for the general case, although the behaviour expected for the special case of one layer being much thinner than the other (eg a thin coating on a massive substrate) is also described. The TLP is focused on the distributions of (in-plane) stress and strain that can arise within the two constituents under various imposed conditions. These could include applying well-defined external forces, such as an in-plane load or a bending moment. In those cases, the response of the system (in-plane extensions or out-of-plane curvatures) can be obtained from well-known expressions related to the mechanics of composites or of beam bending. However, in layered systems, such as coatings on substrates, the imposed conditions are often more complex than this. For example, heating or cooling of the system will lead to differential thermal expansion or contraction, generating a *misfit strain* - that's to say, a difference between the stress-free (in-plane) dimensions of the two constituents. Misfit strains, which can also arise in other ways (such as plastic deformation of one of the layers), are important in the mechanics of layered systems. They can give rise to relatively complex through-thickness distributions of stress and strain, and also to both in-plane length changes and out-of-plane curvature. It may also be noted that misfit strains can arise simultaneously in more than one in-plane direction. In fact, it is common for a given misfit strain to be generated in all in-plane directions - this would normally be the case for differential thermal contraction, assuming in-plane isotropy of the thermal expansion coefficients. As a consequence of Poisson effects, this effectively raises the in-plane stiffness (ratio of stress to strain in any given direction), which can be handled by simply using a *biaxial stiffness* in place of the conventional one. Misfit strains The Concept of a Misfit Strain An important concept in layered (and other) systems is that of a misfit strain - ie a difference between the stress-free dimensions of two or more constituents that are bonded together. It is relevant to composite materials and also to macroscopic systems such as two or more components that are bolted or welded together in some way. In general, this strain is a tensor, with principal axes and three principal values. For a layered system, however, the focus is on a single (in-plane) direction, so that the strain can be treated as a scalar.
A simple type of misfit strain is that arising from differential thermal expansion (in a 2-layer system). In general, one layer will expand more during heating than the other (in the direction concerned). If the two layers were not bonded together, then they would behave as shown in the figure below. The misfit strain (in the x-direction) is given by the product of the difference in expansivity between the two constituents and the temperature change. It is often written as Δε. The fact that the two layers are actually bonded together leads to creation of internal stresses and strains, and to changes in the shape of the system - see the next page. ![misfit strain](images/misfit_strain.jpg) Sources of Misfit Strains - What are effectively misfit strains can arise in a number of ways. One of the simplest is differential thermal contraction, but anything that creates a difference between the (in-plane) stress-free dimensions of a substrate and a coating (or a surface layer of a substrate) has a similar effect. These include phase changes, plastic deformation, creep etc, as well as phenomena, such as atomic bombardment, that can create stresses during formation of a coating. * \( \Delta \alpha \Delta T \) * Phase transformations (eg solidification, resin curing, martensitic transformations) * Plastic deformation (eg shot peening) * Creep (such deformation can also modify existing Δε values) Force and moment balances = Force Balance - The fact that the two layers are in fact bonded together must now be taken into account. Assuming first that the layers remain flat, there is clearly a requirement that both must end up with the same length (in the x-direction). This will require a tensile stress to be created in one layer (the one that is initially shorter) and a compressive stress in the other. The forces acting in each layer (in the x-direction) must add up to the externally applied force (ie a *force balance* must apply). Since there is no applied force in the case we are currently considering, the forces in the two layers must sum to zero. In addition, the misfit strain must be partitioned in some way between the two layers. There are thus two simultaneous equations, allowing the solution to be found. This is illustrated in the figure below. (The subscripts d and s represent “deposit” and “substrate” - ie the upper and lower layers.) ![](images/force_balance.jpg) Moment Balance The force balance is relatively straightforward, but the situation depicted in the figure above does not represent complete static equilibrium. This is because the forces being exerted by the stresses acting in the two layers produce a bending moment that tends to create curvature in the x-y plane - for the case shown, the top surface would become convex. The distributions of stress and strain must therefore change, creating an internal balancing moment (and also creating curvature), while maintaining a force balance. Using the moment balance to find the curvature (and the associated stress and strain distributions) is slightly more complex than applying the force balance. Carrying out the derivation of the curvature leads to the outcome: \[ \kappa = \frac{{6{E\_d}{E\_s}\left( {{h} + {H}} \right){h}{H}\Delta \varepsilon }}{{{E\_d}^2{h}^4 + 4{E\_d}{E\_s}{h}^3{H} + 6{E\_d}{E\_s}{h}^2{H}^2 + 4{E\_d}{E\_s}{h}{H}^3 + {E\_s}^2{H}^4}} \] It's important to recognize that the curvature of a beam is equal to the *through-thickness gradient* of the associated (in-plane) *strain distribution*.
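As a numerical illustration, the curvature expression above can be implemented directly. This is a minimal sketch in Python; all the property values and layer thicknesses are illustrative assumptions, and for an equal biaxial misfit strain the biaxial moduli discussed below should be used in place of E.

```python
def bilayer_curvature(E_d, E_s, h, H, d_eps):
    """Curvature (m^-1) of a deposit (modulus E_d, thickness h) on a
    substrate (modulus E_s, thickness H), for a misfit strain d_eps."""
    num = 6 * E_d * E_s * (h + H) * h * H * d_eps
    den = (E_d**2 * h**4 + 4 * E_d * E_s * h**3 * H
           + 6 * E_d * E_s * h**2 * H**2
           + 4 * E_d * E_s * h * H**3 + E_s**2 * H**4)
    return num / den

# Misfit strain from differential thermal contraction: d_eps = d_alpha * dT
d_alpha = 5e-6    # difference in expansivity (K^-1), assumed
dT = 500.0        # temperature change (K), assumed
kappa = bilayer_curvature(E_d=100e9, E_s=200e9,   # moduli (Pa), assumed
                          h=0.2e-3, H=1.0e-3,     # thicknesses (m), assumed
                          d_eps=d_alpha * dT)
print(f"curvature = {kappa:.2f} m^-1 (radius of curvature = {1/kappa:.2f} m)")
```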
The other key concept here is that of the *neutral axis* of the beam. This is the location - strictly, it's a plane (in 3-D), rather than an axis - where no (in-plane) strains arise from the bending (adoption of curvature). Its location, δ, for a 2-layer system can also be derived. The result is: \[ \delta = \frac{{\left( {{h^2}{E\_d} - {H^2}{E\_s}} \right)}}{{2\left( {h{E\_d} + H{E\_s}} \right)}} \] Using these equations, the final outcome (of imposing a misfit strain) can be obtained. This is illustrated in the figure below. It should be noted that adoption of curvature does NOT lead to zero strain at the neutral axis, but rather to NO CHANGE there - ie the strain remains at the value resulting from the force balance. (If we simply applied an external bending moment, rather than an internal misfit strain, then there would be no strain at the neutral axis.) ![](images/moment_balance.jpg) The biaxial modulus = Biaxial Stress States - So far, attention has been concentrated on a single (in-plane) direction. For conventional (in-plane) uniaxial loading of a bilayer sample, or for applying a bending moment to it, this is appropriate. It's also possible to generate a misfit strain in a single (in-plane) direction. However, while this is possible, it's actually rather unusual. More commonly, the same misfit strain is generated simultaneously in all in-plane directions, a state that can be represented by creating the strain in two arbitrary (in-plane) directions that are normal to each other. This will lead to an *Equal Biaxial* stress state (since all in-plane directions are clearly equivalent and the through-thickness stress, σy, is often taken to be zero - there is no normal stress at a free surface). Differential thermal contraction would normally have this effect. It's also possible to create an *Unequal Biaxial* stress state. This would arise, for example, during differential thermal contraction with one or both of the layers exhibiting in-plane anisotropy in thermal expansivity, so that the misfit strain would be different in different in-plane directions. (Also, anisotropy in stiffness would lead to different stresses in different in-plane directions, even if the misfit strains were equal.) It is, however, common to at least assume that all in-plane directions are equivalent, in terms of both properties and misfit strains. Poisson Effects The main reason why the case of an equal biaxial misfit strain differs from that of a uniaxial one is related to Poisson effects. The strains arising in the selected in-plane direction (the x-direction) will be accompanied by Poisson strains in the other two (principal) directions. That in the through-thickness (y) direction is often of little consequence, but in the other in-plane (z) direction, it will need to be added to the outcome of the effects arising in that direction (from the misfit strain in that direction). The upshot of this is actually rather simple. By symmetry, the two in-plane stresses (and strains) must be equal - ie σx = σz. For isotropic elastic properties and no through-thickness stress (σy = 0), the strain in the x-direction can be written in terms of the three principal stresses: \[{\epsilon\_x}E = {\sigma \_x} - \nu \left( {{\sigma \_y} + {\sigma \_z}} \right) = {\sigma \_x}\left( {1 - \nu } \right)\] where ν is the Poisson ratio.
The ratio of stress to strain in the x-direction (and all in-plane directions) can therefore be expressed \[ \frac {{\sigma \_x}}{\epsilon\_x} = \frac {{E}}{ \left(1 - \nu \right)} = E{^{'}} \] This modified form of the Young's modulus, E′ (often termed the *Biaxial Modulus*), is applicable in expressions referring to substrate/coating systems having an equal biaxial stress state. The effective stiffness (stress/strain ratio) has been raised by this Poisson effect. This higher value should be used in place of E throughout the formulations in the preceding pages (when the misfit strain is generated in all in-plane directions). The Stoney Equation - the Thin Coating Limit Origin of the Stoney Equation - The Stoney equation is still in widespread use. It relates the curvature of a substrate with a thin coating to the stress level within the coating (for an equal biaxial case). It was proposed in 1909 - a long time before the relationship described in earlier pages (for the general case in which the coating thickness is not negligible compared to that of the substrate) was established. However, it is easy to show that the Stoney equation can be derived from that relationship, by imposing the h << H condition, which allows all of the denominator terms except the last one to be discarded and (h + H) ≈ H to be assumed, so that \[ \kappa = \frac{{6{E{^{'}\_\rm{d}}}h \Delta \epsilon }}{{{E{^{'}\_\rm{s}}}{H^2}}} \] with the biaxial moduli now being used. Furthermore, the h << H condition allows the assumption to be made that all of the misfit strain is accommodated in the coating (deposit), so only the coating is under stress. In addition, the Stoney equation is based on an equal biaxial stress state, so that the biaxial versions of these Young's moduli should be used and the misfit strain can be expressed as \[ \Delta \epsilon = \frac{{−\sigma\_d}}{E{^{'}\_\rm{d}}} \] recognizing that, with the convention we are using for \( \Delta \epsilon \), a positive value will generate a negative value for \( \sigma\_\rm{d} \) (ie a compressive stress). Substitution of this then leads to the Stoney equation: \[ \kappa = \frac{{−6{E{^{'}\_\rm{d}}}h \sigma\_\rm{d} }}{{{E{^{'}\_\rm{s}}}{H^2}{E{^{'}\_\rm{d}}}}} = \frac{{−6h \sigma\_\rm{d} }}{{{E{^{'}\_\rm{s}}}{H^2}}} \;\;\; ∴\; {\sigma\_\rm{d}} = \frac{− {E{\_\rm{s}}{H^2}}}{6h \left( {1 − {\nu\_\rm{s}}} \right)}\kappa \] (The minus sign is not always included, but the curvature should have a sign and, using the convention that a convex upper surface corresponds to positive curvature, a positive curvature implies a negative deposit stress.) This equation allows a coating stress to be obtained from a (measured) curvature. Only \( E \) and \( \nu \) values for the substrate are needed - this is convenient, since they are often known (whereas those of the coating may not be). A single stress value is obtained - if the coating is thin, then any through-thickness variation in its value is likely to be small. It is really the misfit strain that is the more fundamental measure of the characteristics of the system, but a stress value is often regarded as more easily interpreted. Approach to Stoney Conditions - The Stoney equation is easy to use and, indeed, is widely used. However, it has the limitation of being accurate only in a regime in which the curvatures tend to be relatively small. In some applications – such as with semiconductor wafers – surfaces are very smooth, so that highly accurate optical methods of curvature measurement are feasible and this is not such a problem.
However, when curvatures are high (or need to be high for reliable measurement), the Stoney equation should not be used. The simulation below allows exploration of the conditions under which the Stoney equation gives a good or poor approximation to the actual behaviour. It should be noted that, while the stresses are scale-independent, the curvatures are not - changing the substrate thickness thus affects the plots on the left, but not those on the right. Spallation (Interfacial Debonding) Driving Force for Spallation The presence of residual stresses in a substrate/coating system constitutes a driving force for debonding (spallation), since such stresses will almost certainly be at least partially relaxed when this occurs, releasing stored elastic strain energy. The key process is that of propagation of a crack along the interface, driven by the associated release of this stored energy. This propagation is illustrated below, for a (Stoney) case in which there is just a single (uniform) stress in the coating. ![](images/debonding3.jpg) It can be seen that propagation of this interfacial crack will be energetically favoured if the driving force (strain energy release rate) is equal to or greater than the (mode II - ie shearing mode) fracture energy of the interface, Gic: \[ E{^{'}\_{\rm{d}}} \epsilon \_{\rm{d}}^2 h \left( = \frac{{\sigma \_{\rm{d}}^2 h}}{{E{^{'}\_{\rm{d}}}}} \right) \geq G\_{\rm{ic}} \] This takes no account of any barrier to initiation of the crack. In many cases, however, there are likely to be relatively large defects present in the interface, so the above condition may well lead to spallation. It certainly means that the coating is (thermodynamically) unstable. Immediate implications are (unsurprisingly) that high stresses and brittle interfaces (low Gic) make debonding more likely. Also clear (and widely observed) is that thicker coatings are more likely to debond than thinner ones. Debonding for non-Stoney Cases The same concept can be applied to more general cases - ie when the h << H condition does not apply. Debonding will still tend to allow a reduction in the stored elastic strain energy, constituting a driving force for spallation. However, since there is now a distribution of stress (and strain) in the through-thickness (y) direction, an integration is needed to evaluate the driving force \[ {G\_i} = \mathop \smallint \limits\_{ - H}^h \frac{{\sigma {{\left( y \right)}^2}}}{{E'\left( y \right)}}{\rm{d}}y \] This is based on the assumption that these stresses become totally relaxed during debonding. This might be the case - for example, the sets of stresses and strains that have been predicted to arise from imposition of a uniform misfit strain would completely disappear if the coating could debond. In practice, however, the stress distribution within the system may be more complex than this. For example, it's common for stresses to be created in a coating during its formation. As such a coating gets thicker, balancing of forces and moments takes place progressively, so that conditions change and the final stress distribution is not one corresponding to imposition of a uniform misfit strain. In such cases, debonding may leave some residual stresses (and residual curvature of the coating and possibly of the substrate).
The net driving force may then be written \[ {G\_i} = \mathop \smallint \limits\_{ - H}^h \frac{{\sigma {{\left( y \right)}^2}}}{{E'\left( y \right)}}{\rm{d}}y - \mathop \smallint \limits\_{ - H}^h \frac{{{\sigma \_r}{{\left( y \right)}^2}}}{{E'\left( y \right)}}{\rm{d}}y \] where σr(y) is the residual stress distribution after debonding. Debonding of coatings is, of course, commonly observed, since virtually all coatings contain at least some stresses and the associated strain energy can rise above the critical level - for example, as a result of temperature changes, thickening (eg oxide growth), stiffening (eg due to sintering), applied forces or bending moments etc. Also, the toughness (fracture energy) of the interface (Gic) may fall with time - eg due to chemical attack etc. The video clips below give some examples of how coating spallation can be observed and analysed. **Spontaneous Debonding.** This video shows a set of coated samples (zirconia on alumina substrates) being withdrawn from a furnace and cooled by gas jets. The cooling creates differential thermal contraction stresses and, once these have become sufficiently large, coatings can spontaneously debond. Such an event can be seen at the end of this short video. **In Situ Curvature Measurement.** This video, which has a commentary, explains how, provided the substrate is relatively thin, the deposition of a coating (in this case by thermal spraying of ceramics onto metal substrates) generates curvature, which can be monitored as the coating thickness increases. In conjunction with a numerical model, this can be used to infer the residual stress levels in coatings produced under a range of conditions. **Debonding during Cooling of Thin Substrate with Coatings.** This video, which has a commentary, explains how stored residual stresses provide a driving force for debonding (spallation). It is also explained how estimates can be made of the interfacial toughness (fracture energy) from observations of debonding during cooling. **Debonding under Applied Load of Coatings with Residual Stresses.** This video, which has a commentary, goes briefly through the 4-point bend delamination test for coatings that already contain stored residual stresses. It is shown how this can be used to measure the interfacial fracture energy. Summary = A coating on a planar substrate is an important special case of a composite system. Attention is concentrated here on the mechanical (stress-related) effects that commonly arise, with the possibility of stresses (in-plane only) and strains within both constituents being generated in a variety of ways. In the treatment presented here, edge effects are ignored, which simplifies matters, but considerable attention is paid to the way in which the system may become curved, and to the relationship between the curvature and the internal stresses and strains. Two main cases are considered, depending on whether it is possible to assume that the coating is very much thinner than the substrate. If this is the case, then certain assumptions can be made and some relatively simple equations can be used to describe the behaviour.
On the other hand, it is emphasized that there are many practical cases for which this condition cannot be assumed, although (slightly more complex) analytical treatments can then be employed, and these are described here in some detail. It is also worth noting that a further distinction can be drawn, depending on whether the stress state within the coating (and substrate) comprises a uniaxial (in-plane) stress or an equal biaxial set of (in-plane) stresses, with the latter being much more common. In this case, it is explained that the ratio of stress to strain in any given (in-plane) direction is given by the biaxial modulus, rather than the Young's modulus (which applies for a uniaxial stress state). Finally, a brief outline is presented of how the stored elastic strain energy associated with the presence of stresses in a coating (and substrate) constitutes a driving force for spallation (interfacial crack propagation). A simple criterion is presented for advance of such a crack and some examples are given of how this can be utilized in some practical cases. Questions = ### Quick questions *You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*
1. What is meant by a "misfit strain"?
 a. A strain corresponding to the difference in stress-free dimensions of two constituents that are bonded together in some way.
 b. A strain that needs to be imposed on a coating in order to make it stress-free.
 c. A strain resulting from imposition of the condition that coating and substrate remain bonded together.
 d. A strain that arises between coating and substrate as a result of temperature change.
2. Why does curvature tend to arise in a coating-substrate system, as a result of the stresses in each constituent caused by imposing a force balance?
 a. The curvature arises because it allows the stress in the coating to be completely relaxed.
 b. The force balance cannot be satisfied unless it is accompanied by curvature.
 c. There is a lateral separation between the axes along which the corresponding forces act, so a bending moment is generated, creating a curvature.
 d. The forces that arise act along axes that are not parallel, so a bending moment is generated, creating a curvature.
3. Which of these conditions is sufficient to ensure that the Stoney equation is a good approximation for the relationship between coating stress and curvature?
 a. The stress state is an equal biaxial one.
 b. The coating is thinner than the substrate.
 c. The stiffness of the coating is lower than that of the substrate.
 d. The magnitude of the average stress in the coating is much greater than that in the substrate.
4. What is meant by the "Biaxial Modulus" of a coating (or a substrate) and why is it greater than the conventional (Young's) modulus?
 a. It corresponds to the stress-strain ratio in any in-plane direction, for an equal biaxial stress state, and it is larger than the Young's modulus because of Poisson effects.
 b. It corresponds to the stress-strain ratio (in an elastic system) in any in-plane direction, for an equal biaxial stress state, and it is larger than the Young's modulus because of Poisson effects.
 c. It corresponds to the stress-strain ratio (in an elastic system) in any in-plane direction and it is larger than the Young's modulus because of Poisson effects.
 d. It corresponds to the stress-strain ratio (in an elastic system) in any in-plane direction, for an equal biaxial stress state, and it is larger than the Young's modulus because of the Poisson contraction arising from a stress in another in-plane direction.
### Deeper questions *The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*
5. How is the neutral axis (strictly, the neutral plane) of a beam defined?
 a. It is the line along which, when an initially unstressed beam is subjected to a bending moment, no strains arise while the beam becomes curved.
 b. It is the line along which there are no shear strains in the beam.
 c. It is the line along which there is no strain when the beam becomes curved.
 d. It is the line along which there is no stress when the beam becomes curved.
6. Which of these definitions of the curvature of a beam is correct?
 a. Beam stiffness times applied moment.
 b. Through-thickness gradient of the in-plane stress (in the plane of curvature) induced by the adoption of curvature.
 c. Beam thickness divided by radius of curvature.
 d. Through-thickness gradient of the in-plane strain (in the plane of curvature) induced by the adoption of curvature.
7. Which of these statements regarding debonding (spallation) of coatings is incorrect?
 a. If the residual stress in a coating is compressive, rather than tensile, then it is less likely to debond from the substrate.
 b. Coatings are more likely to debond if the coating / substrate interface is brittle (has a low toughness).
 c. The main driving force for debonding of many coatings is the stored elastic strain energy associated with the residual stresses in them.
 d. Thick coatings tend to debond more readily than thin ones.
Going further = ### Books There are not really any books that specifically cover all of the material in this TLP, although there are, of course, books that treat various aspects of coatings, and of surface engineering more generally. The three below cover, respectively, the mechanics and reliability of films, multi-layers and coatings; the mechanics of components with treated or coated surfaces; and the stresses, defect formation and surface evolution of thin film materials. MR Begley & JW Hutchinson, *The Mechanics and Reliability of Films, Multi-layers and Coatings*, CUP (2017) ISBN 9781107131866. J Mencik, *Mechanics of Components with Treated or Coated Surfaces*, Springer (2010) ISBN 978-9048146116. L.B. Freund & S. Suresh, *Thin Film Materials: Stress, Defect Formation and Surface Evolution*, CUP (2004) ISBN 1139449826. ### Other resources There are many journal papers that cover the issue of stresses and strains in surface coatings, and also the broader topics of how they can provide various kinds of protection, including thermal, environmental and tribological. The three below respectively present a review of developments since the original Stoney equation, an outline of how progressive deposition of a coating onto a thin substrate can be treated, and an analysis of the biaxial moduli of cubic materials.
GCAM Janssen, MM Abdalla, F van Keulen, BR Pujada & B van Venrooy, *Celebrating the 100th Anniversary of the Stoney Equation for Film Stress: Developments from Polycrystalline Steel Strips to Single Crystal Silicon Wafers*, Thin Solid Films, **517** (2009) p.1858-1867. YC Tsui & TW Clyne, *An Analytical Model for Predicting Residual Stresses in Progressively Deposited Coatings. 1. Planar Geometry*, Thin Solid Films, **306** (1997) p.23-33. KM Knowles, *The Biaxial Moduli of Cubic Materials Subjected to an Equi-biaxial Elastic Strain*, J. of Elasticity, **124** (2016) p.1-25.
Aims On completion of this TLP package, you should: * Have an appreciation of the mechanisms involved in creep deformation, and their general dependence on temperature and stress * Understand what is meant by Primary, Secondary and Tertiary Creep * Know how creep curves can be represented by (empirical) constitutive laws, and how the values of parameters in them, such as stress exponents and activation energies, can be obtained from experimental data * Know how a conventional uniaxial creep test is carried out * Be familiar with a particular set-up for experimental study of the creep characteristics of a material, available in wire form * Understand the basics of Indentation Creep Plastometry Before you start The following TLPs are relevant and could be consulted before you start. Introduction When a material is subjected to a stress that reaches the yield stress, it deforms ***plastically*** (permanently). Provided the stress is kept below this level, then in principle it should only deform elastically. However, if the homologous temperature (T/Tm) is relatively high (above ~0.4), then in practice plastic deformation can occur, even if the applied stress is lower than the yield stress. This deformation is usually progressive with time and is commonly known as ***creep***.  During loading under a constant stress, the strain tends to vary with time approximately as shown below, where the effect of changing the applied stress is also indicated. The graph below is a plot of *creep strain*.  The elastic strain has been omitted.  In practice, it is quite common to do this (for plasticity, as well as for creep).  It's worth noting that elastic strains rarely exceed a small fraction of a %, at least for metals, whereas both plastic strains and creep strains commonly reach the range of several tens of %. ![Schematic creep curves for 3 different levels of applied stress](images/intro.jpg)  Schematic creep curves, for 3 different levels of applied stress (σ1 > σ2 > σ3) The terms "***Primary***", "***Secondary***" and "***Tertiary***" creep are widely used.  At a simple level, they are often associated respectively with the concepts of: (i) setting up some kind of mechanistic balance, (ii) steady state (constant strain rate) deformation occurring once this balance has been set up and (iii) the breakdown of this balance, often with defects starting to appear and failure rapidly following. In reality, even without going into details of the mechanisms involved, this picture is simplistic and potentially misleading.  For example, the transition between primary and secondary regimes is often poorly-defined and indeed a true steady state may never be set up.  Furthermore, creep tests are commonly carried out with a constant applied load (rather than a constant true stress). Thus, for tensile tests, the tertiary regime may actually be a consequence of the fact that the true stress is rising throughout the test, with the rate of increase becoming greater towards the end of the test.  At least in some cases, the tertiary regime may be associated with the true stress starting to approach or exceed the yield stress of the material.  The situation can be further complicated by the possibility that significant microstructural changes (such as recrystallization), which could strongly affect the mechanical response, may occur during the test. These issues are examined in this TLP, together with some details concerning creep mechanisms and the ways in which creep testing can be carried out, and the resultant experimental data interpreted.
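The curves above are purely schematic, but their general shape is easy to reproduce. The short sketch below uses an entirely illustrative empirical form (a decelerating primary term, a linear secondary term and an accelerating tertiary term); it is not one of the physically based laws discussed later in this TLP.

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0.0, 100.0, 500)  # arbitrary time units

# Illustrative empirical form only (not a physical creep law):
# primary (t^0.3), secondary (linear) and tertiary (t^3) contributions,
# each scaled by the applied stress level.
for sigma, label in [(3.0, "σ1"), (2.0, "σ2"), (1.0, "σ3")]:
    strain = 0.02 * sigma * t**0.3 + 0.001 * sigma * t + 1e-7 * sigma * t**3
    plt.plot(t, strain, label=label)

plt.xlabel("time (arbitrary units)")
plt.ylabel("creep strain")
plt.legend()
plt.show()
```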
Creep Mechanisms The detailed mechanisms responsible for creep tend to be complex.  However, they almost always involve diffusion of some sort.  This is how the time-dependence arises, since diffusional processes are progressive with time.  Further information about the fundamentals of diffusion is available in the Diffusion TLP. Creep deformation is caused by the deviatoric (shape changing) component of the stress state applied to a sample – the ***von Mises stress***. The hydrostatic component has no effect, meaning that creep deformation occurs at ***constant volume***. However, whilst the overall deformation is in response to the deviatoric stress, local variations in hydrostatic stress do affect creep behaviour, as will be outlined below. Coble Creep - Creep deformation often involves various defects, particularly ***dislocation cores*** or ***grain boundaries***.  These may simply act as ***fast diffusion paths***, or play a larger role in creep mechanisms (some of which are beyond the scope of this TLP), depending on factors such as dislocation density, grain size, grain shape and temperature.   The shape change experienced by the sample may arise simply from atoms becoming redistributed by diffusion.  When this occurs on the scale of a grain, with the diffusion occurring mainly via grain boundaries, then this is commonly referred to as ***Coble Creep***.  The simulation below shows how this tends to cause the sample to extend under an applied load. *Note: Coloured atoms are no different from those in the bulk, but merely allow each atom to be identified easily.*Simulation demonstrating Coble creep on the atomic scale It can be seen that raising the applied stress accelerates the rate of deformation.  The driving force for this net migration of material (from the "***equatorial***" regions of grains to the "***polar***" regions) is that an applied tensile stress like this creates hydrostatic compression in the equatorial regions and hydrostatic tension in the polar regions.  The hydrostatic tension can be thought of as arising from the applied tensile stress, while the compression arises from the lateral contraction of the sample (due to volume conservation). The atoms then tend to move from the more "crowded" to the more "open" regions.  The diffusive flux can be considered as a migration of vacancies in the opposite direction to the motion of atoms, although the concept of vacant sites is less well-defined in a grain boundary than in the lattice.  We could equally imagine how hydrostatic compression could arise in the ***"polar"*** regions, and hydrostatic tension could arise in the ***"equatorial"*** regions, via the application of a compressive stress to the sample. The diffusive flux would then be in the opposite direction, and the sample would deform in the opposite fashion, with the grains becoming "squashed" by the compressive stress. It's also clear that raising the temperature increases the creep rate.  This is simply due to the rates of diffusion becoming higher, as a consequence of their Arrhenius dependence on temperature.  The activation energy for grain boundary diffusion is low, although the cross-sectional area available for diffusion along grain boundaries is much less than for diffusion through the bulk. This type of creep is therefore often the dominant one at ***relatively low temperatures*** and for samples with a ***fine grain size***.
Nabarro-Herring Creep - A similar type of creep deformation to that described above can occur with the diffusion being predominantly within the interior (crystal lattice) of the grains, rather than in the grain boundaries.  This is often termed ***Nabarro-Herring creep*** (N-H creep).  It is depicted in the simulation below. Simulation demonstrating Nabarro-Herring creep on the atomic scale A similar dependence on temperature and stress is observed to that for Coble creep.  The diffusion of atoms in one direction can be more easily pictured as the diffusion of vacancies in the other direction during N-H creep. There is a considerably greater sectional area available via crystal lattice paths, particularly if the grains are relatively large.  On the other hand, the activation energy is higher, so diffusion rates tend to be low, particularly at low temperature.  This type of creep tends to dominate over Coble creep at ***relatively high temperature,*** and with ***large grains or single crystals***. Dislocation Creep - Purely diffusional creep (Coble and Nabarro-Herring) is fairly simple, and does occur under certain conditions  -  usually with relatively low applied stresses.  With higher stresses, it is common for a type of creep to occur that involves motion of dislocations, particularly in metals, where dislocation densities tend to be high.  Provided the stress is below the yield stress, conventional macroscopic plasticity, occurring predominantly via dislocation glide, should not occur.  However, with stresses that are starting to approach the yield stress, and are maintained for extended periods, progressive dislocation motion, and hence macroscopic plastic deformation, can occur, often facilitated by extensive ***climb*** (absorption or emission of vacancies at the core) of individual dislocations. It should be noted that climb does not refer only to vertical motion of the dislocation; it can also involve horizontal motion.  One of the ways dislocation creep can occur via climb is shown in the simulation below: Simulation demonstrating Dislocation Creep on the atomic scale In this example, the shear stress provides the driving force for diffusion into the dislocation core, rather than hydrostatic compression or tension as in the case of Coble or N-H creep. In detail, there are several different ways in which combinations of dislocation glide and diffusion in the vicinity of dislocations can promote creep.  Some of these have been given specific names, but these often relate to observed dependences on the main variables (e.g. ***temperature*** or ***grain size***), rather than being clear about the precise mechanisms involved.  In general, they all involve some combination of dislocation climb and glide, although, in particular cases, factors such as the presence of ***obstacles*** (e.g. fine precipitates), the ease of ***cross-slip*** etc. may affect the observed behaviour. ***Dislocation density*** may also affect the behaviour. However, at the high homologous temperatures at which creep typically occurs, the dislocation density may decrease somewhat with time, or could indeed drop sharply if recrystallization were to occur. It might be imagined that this would reduce the rate of deformation. However, in practice the associated decrease in the yield stress might well promote the onset of conventional plasticity - ie extensive dislocation glide - such that the rate of deformation increased.
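All of these mechanisms inherit the Arrhenius temperature dependence of the diffusional processes involved. As a minimal numerical sketch of why Coble creep tends to dominate at low temperature and N-H creep at high temperature, the following compares rates controlled by grain boundary and lattice diffusion; the activation energies and pre-exponential factors are purely illustrative (hypothetical), not data for any real material.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

# Hypothetical activation energies: grain boundary diffusion (Coble)
# is easier than lattice diffusion (Nabarro-Herring)
Q_gb, Q_lattice = 90e3, 180e3  # J/mol

# Hypothetical pre-exponential factors: the lattice offers a much
# larger sectional area for diffusion than the thin grain boundaries
A_gb, A_lattice = 1.0, 1.0e4

for T in (600.0, 800.0, 1000.0, 1200.0):  # K
    ratio = (A_gb * np.exp(-Q_gb / (R * T))) / (A_lattice * np.exp(-Q_lattice / (R * T)))
    print(f"T = {T:4.0f} K   (Coble rate)/(N-H rate) = {ratio:9.3g}")
```

With these (made-up) values, the ratio falls from several thousand at 600 K to below unity at 1200 K, reflecting the crossover in dominance described above.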
Constitutive Laws for Creep = Effects of Microstructure on Creep As with plastic deformation, creep is a complex process that is strongly affected by the microstructure of the material.  (Some of the microstructural effects that influence plasticity are summarised in other TLPs.)  As with plasticity, however, ***guidelines*** can be identified concerning features likely to affect (inhibit) creep, and some of these are similar for the two.  For example, a ***fine array of precipitates***, which will inhibit dislocation glide and hence raise the yield stress, is also likely to inhibit (dislocation) creep.  However, there are limits to such linkage.  For example, precipitates might ***dissolve*** at the high temperatures involved in creep.  More fundamentally, some features can affect creep and plasticity quite differently.  For example, while a fine grain size tends to raise the yield stress, as a result of ***grain boundaries*** acting as ***obstacles*** to dislocation glide, it can cause accelerated creep in a diffusion-dominated regime, since such boundaries also constitute ***fast diffusion paths***. Nevertheless, as with plasticity, empirical ***constitutive laws*** can be used to model and predict creep behaviour.  There is sometimes scope for interpreting the ***values of parameters*** in these laws in terms of the dominant ***mechanisms*** involved.  In particular, if rates of creep are measured over a range of temperature, then it may be possible to evaluate the ***activation energy***, Q (in an Arrhenius expression), which could in turn provide information about the type of diffusional process that is rate-determining.  It is also often claimed that the value of a ***stress exponent***, *n*, obtained via creep rate measurements over a range of applied stress, is indicative of the dominant mechanism, with a low value (~1-2) indicating ***pure diffusional creep*** and a higher value (~3-6) suggestive of ***dislocation creep***.  The theoretical basis for such conclusions may sometimes be questioned, and in any event these laws must be recognised as essentially empirical, but it is certainly important to be able to characterize the creep response of a material, and to be clear about the regime of temperature and stress for which a particular law is valid. General (Steady State) Creep Law The creep strain rate (rate of change of the von Mises plastic strain) in the steady state (Stage II) regime is often written \[\dot \varepsilon = A \sigma^n \exp ( - Q/RT)\;\;\;\;\;\;\;\;(1)\] where A is a constant, σ is the applied (von Mises) stress, Q is the ***activation energy*** and *n* is the ***stress exponent***.  This is a relatively simple equation, but several caveats should be added.  The most important of these are apparent in Fig.1 of the Introduction page  -  ie it relates only to the steady state (secondary) regime.  It is sometimes stated that the overall creep life is often ***dominated*** by this regime.  In practice, this may or may not be true.  In particular, it's worth noting that, not only can the primary regime extend over a significant fraction of the creep lifetime, but also, since the creep rate is often much higher during primary creep, its contribution to the overall creep strain can be substantial, and even dominant.  Depending on a number of factors, simply ignoring primary creep may be highly inappropriate.
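Before looking at the evidence on activation energies, it may help to see how Q and n would actually be extracted from measurements. The following is a minimal sketch using synthetic (made-up) data generated from Eqn (1) itself: the stress exponent is the gradient of ln(strain rate) against ln(stress) at fixed temperature, and the activation energy follows from the gradient of ln(strain rate) against 1/T at fixed stress.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

# Hypothetical "true" parameters, used only to generate synthetic data
A, n, Q = 1.0e-3, 4.0, 150e3

def strain_rate(stress, T):
    """Steady-state creep rate according to Eqn (1)."""
    return A * stress**n * np.exp(-Q / (R * T))

# Stress exponent: gradient of ln(rate) vs ln(stress) at fixed T
stresses = np.array([20e6, 40e6, 60e6, 80e6])  # Pa
n_fit = np.polyfit(np.log(stresses), np.log(strain_rate(stresses, 900.0)), 1)[0]

# Activation energy: gradient of ln(rate) vs 1/T at fixed stress is -Q/R
temps = np.array([800.0, 900.0, 1000.0, 1100.0])  # K
Q_fit = -R * np.polyfit(1.0 / temps, np.log(strain_rate(50e6, temps)), 1)[0]

print(f"n = {n_fit:.2f} (input value {n})")
print(f"Q = {Q_fit / 1e3:.0f} kJ/mol (input value {Q / 1e3:.0f})")
```

With real data, the same regressions apply, but scatter about the fitted lines then indicates how well a single (n, Q) pair describes the regime being tested.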
![self diffusion Ea v high temperature creep Ea](images/Activation energies graph.png) Plot of self-diffusion activation energy vs. high temperature creep activation energy, with a line drawn along which the two are equal. [1] It may be noted that the activation energy for creep at high temperatures is often found to agree closely with that for bulk diffusion  -  see, for example, the data in the figure above.  This is consistent with the concept of N-H creep (diffusion through the lattice) dominating over Coble creep (grain boundary diffusion) at high temperatures. Laws Capturing both Primary and Secondary Regimes - In view of the potential importance of the primary regime, there is strong interest in using modelling approaches that incorporate it.   Several expressions have been proposed for capture of both primary and secondary regimes, and of the transition between them.  One that can be taken as representative is the following equation, which is sometimes termed the ***Miller-Norton law*** \[\varepsilon_{\rm cr} = \frac{C \sigma^n t^{m+1}}{m+1} \exp\left(\frac{-Q}{RT}\right) \;\;\;\;\;\;\;\;(2)\] In this expression, *C* is a constant (units of \(\rm Pa^{-n}\,s^{-(m+1)}\)), t is the time (s), n is the stress exponent and m is a dimensionless constant.  The simulation below, in which this equation is plotted, can be used to explore Miller-Norton creep strain plots as the 6 parameters involved are varied. Simulation showing behaviour predicted by M-N law with constant true stress A number of features should quickly become apparent, such as the high sensitivity to temperature, and the fact that the sensitivity to the applied stress increases as the value of n is raised.  One issue here is whether the applied stress is a nominal or a true value.  It certainly should be a true value, as this is implicit in the M-N law.  However, it is common during testing to fix the load (often in the form of a ***dead weight***), rather than the true stress.  Also, most uniaxial creep tests tend to be carried out in ***tension***.  Neglecting any inhomogeneity that might arise, such as a necking effect - which is not common during creep testing - the true stress will rise as straining occurs and the cross-sectional area reduces.  Depending on the value of *n*, this could have the effect of causing the strain rate to rise (whereas it would otherwise be falling and approaching a constant value). This effect is modelled in the simulation below.  The M-N law can be differentiated with respect to time, to give \[\dot\varepsilon_{\rm cr} = C \sigma^n t^m \exp\left(\frac{-Q}{RT}\right) \;\;\;\;\;\;\;\;(3)\] Therefore, by stepping in time and repeatedly re-evaluating the true stress, and hence the strain rate, the full creep strain curve can be built up (although it can no longer be expressed as a single analytical equation).  In order to implement this, the relationship between true and nominal stresses is needed: \[\sigma_T = \frac{F}{A} = \frac{FL}{A_0 L_0} = \frac{F(L_0 + \varepsilon_N L_0)}{A_0 L_0} = \frac{F}{A_0}(1 + \varepsilon_N) = \sigma_N(1 + \varepsilon_N)\;\;\;\;\;\;\;(4)\] and also that between true and nominal strains \[\varepsilon_T = \int_{L_0}^{L} \frac{{\rm d}L}{L} = \ln\left(\frac{L}{L_0}\right) = \ln(1 + \varepsilon_N) \;\;\;\;\;\;\;(5)\] The above plot can therefore be modified using Eqns. (3), (4) and (5), with the nominal stress taken as constant.  This is done by stepping forward in small increments of time (ie this is a numerical procedure, rather than just the plotting of an analytical equation).  The sequence is as follows.
After an initial small time increment, true stress and true strain are taken to be equal to the nominal values.  For the next increment, the strain rate is obtained using Eqn.(3), and hence the increment of (true) strain is obtained on multiplying this by the time increment.  In order to use Eqn.(3), the true stress is needed.  This is obtained using Eqn.(4), with the nominal strain obtained from the true strain using Eqn.(5).  This operation is repeated after every time step, with the true stress progressively rising. In the simulation below, the stress selected is a nominal value. Depending on several factors (particularly the value of n), a significant rise in the creep strain rate can be seen. This is sometimes (mis-)interpreted as a "Tertiary" regime, although a similar increase could also arise as a result of microstructural changes (e.g. ***cavitation***).  In the case of uniaxial compressive testing, the strain rate will tend to fall, due to a drop in true stress, and hence no such regime will be seen. Of course, these effects will only be noticeable at relatively high strains (say, >~10%), although in practice these are common during uniaxial creep testing. This figure compares the (tensile) creep strain plot of the standard M-N equation, using a constant nominal stress, with that obtained by stepping through time, repeatedly evaluating the true stress and then using that in the differential form of the equation. It should also be noted that this analysis is based on the Miller-Norton equation being valid over the range of stresses being considered.  In fact, if the stress rises significantly, then this may not be the case.  In particular, if the true stress starts to reach the yield stress, then conventional plasticity may be stimulated, which would probably be apparent in a creep strain plot as a sharply increasing strain rate.  In fact, this may be responsible for much observed "Tertiary Creep", rather than it being an effect that can be fully captured by applying the Miller-Norton equation while taking account of the changing true stress. [1] https://www.intechopen.com/books/superalloys/phase-equilibrium-evolution-in-single-crystal-ni-based-superalloys   Uniaxial Creep Testing - Practical Basics = Many of the practical issues outlined in the corresponding TLP (for Plasticity) apply equally to Creep Testing.  For tensile testing, the gauge length must have a smaller sectional area than the region that is gripped, such that the latter undergoes only elastic deformation.  This is not the case for compressive testing, where the sample usually has a uniform section along its length, and is located between hard platens.  For Creep Testing, however, there are additional challenges.  For example, it is commonly carried out at high temperature, so a furnace (with good thermal stability) is needed, and all of the sample must be held at the selected temperature.  Also, the load must be sustained for long periods  -  perhaps just a few hours, but in some cases periods of many weeks or even many months might be needed.  Such conditions bring slightly different challenges from those of conventional (stress-strain) testing. Nominal or True Levels of Stress and Strain? As for conventional testing, there is the issue of true or nominal versions of both stresses and strains.  However, there is a difference with Creep Testing.  For Stress-Strain Testing, the applied load is being ramped up continuously, so the issue reduces to that of converting any particular load level to a stress in the sample.  Provided the stress is uniform throughout the sample, a simple equation can be used to convert an applied load to a true stress, and a similar operation allows an extension (nominal strain) to be converted to a true strain.
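As a concrete illustration of these conversions, and of the time-stepping procedure described on the previous page, the minimal sketch below integrates Eqn (3) under a fixed load, updating the true stress at every step via Eqns (4) and (5). All parameter values are illustrative (hypothetical) rather than data for any real material.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

# Hypothetical Miller-Norton parameters and test conditions
C = 2.0e-18            # units of Pa^-n s^-(m+1)
n, m = 3.0, -0.5
Q, T = 150e3, 900.0    # J/mol, K
sigma_nom = 40e6       # constant nominal stress (fixed load), Pa
arr = np.exp(-Q / (R * T))  # Arrhenius factor (fixed test temperature)

dt = 1.0               # time step, s
t, eps_true = 0.0, 0.0
for _ in range(36000):                        # simulate 10 hours
    t += dt
    eps_nom = np.exp(eps_true) - 1.0          # Eqn (5), inverted
    sigma_true = sigma_nom * (1.0 + eps_nom)  # Eqn (4)
    rate = C * sigma_true**n * t**m * arr     # Eqn (3)
    eps_true += rate * dt

print(f"true creep strain after {t / 3600:.0f} h: {eps_true:.3f}")
print(f"final true stress: {sigma_true / 1e6:.1f} MPa (nominal 40.0 MPa)")
```

Re-running this with the true-stress update removed (using sigma_nom throughout) recovers the standard analytical M-N curve, so the difference between the two runs isolates the fixed-load effect discussed above.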
There is a rather more fundamental problem with Creep Testing.  As noted in the Introduction page of this TLP, creep is characterised by a series of strain-time curves for different levels of (true) stress.  Under fixed load conditions, the true stress changes continuously during the test, so these curves will be unreliable (for tests in which relatively large strains are generated). Fixed Load Machines - Many creep facilities are based on a fixed load, applied via a dead weight (with a lever arrangement such that the actual applied force is considerably larger than the weight itself).  A typical facility of this type is shown below.  The actual weight used is unlikely to be much more than a few tens of kg.  However, with the lever arm arrangement giving a mechanical advantage of at least 10, and perhaps considerably more, such weights can produce an applied force on the sample of up to several tens of kN.  This is usually sufficient for most situations (sample dimensions and required stress levels), although the limitations associated with having a fixed nominal stress (varying true stress) should be noted.  ![dead weight creep frame](images/dead_weight_creep_frame.jpg) A typical (dead weight) creep frame (produced by Medilab Enterprises) Variable Load Machines It is, of course, possible to have a loading frame in which the force (usually produced either by screw-driven or hydraulic systems) can be varied during the test.   A typical facility of this type is shown below.  Such machines can readily generate forces of up to hundreds of kN.  With the strain (extension) being continuously monitored, software control can be used to change the applied load such that a constant true stress is maintained. However, it is worth noting that such machines tend to be expensive.  They're not well-suited to very long term tests, both because of potential wear on the loading system and because tying up such expensive facilities for long periods is not economically attractive.  If a number of samples are to be tested over periods of weeks or months, which is not unusual for creep testing, then it is much more likely that a set of dead weight machines will be used. ![variable load creep loading frame](images/variable_loading_frame.jpg) A typical (variable load) creep loading frame (produced by IBERTEST)   Multiaxial Creep Testing – The Creeping Coil Background *Note: this experiment deals with torsion, a topic that is explored further in other TLPs.* This experiment is a simple and convenient one to carry out.  The sample is in the form of a wire that is wound into a coil, which creeps under its own weight.  Using a low melting point material such as lead or solder, this occurs at relatively low temperatures (without heating, or using a simple hot air system).  Several simplifying assumptions are made in carrying out the analysis.  Each turn of the coil is taken to experience a constant stress  -  the average value acting throughout the section of the wire.  Furthermore, it is assumed that the steady state regime of behaviour is immediately established everywhere  -  ie primary creep is neglected.  These are crude assumptions and the experiment cannot be used to obtain quantitative creep characteristics.
However, it is possible to obtain fairly reliable indications of the dependence of creep rates on temperature (ie values of Q) and, to some extent, on stress (ie values of n). ![Diagram of a coiled sample](images/img004.jpg) Diagram showing the coil with each turn assigned a number, starting from N = 0 at the bottom, where the stress is zero The stress in a particular turn is proportional to its number, N, where the turns are numbered beginning from zero at the bottom turn and ending at the top. The shear stress in each turn varies from zero at the centre of the wire section to a maximum value at its surface, and its average, τ, is given by: \[\tau = BN\] where B is a constant of proportionality (for this experiment). The coil is then allowed to creep over a fixed amount of time (e.g. one minute) and at the end of this time the spacings, s, between the turns are measured. The average local shear strain γ in each turn is a function of s and is given by \[\gamma = Cs\] where C is a constant. Assuming that steady state creep dominates, the strain rate is simply the strain divided by the time.  This equation can now be used to explore the relationship between strain rate and shear stress \[\dot \gamma = C\frac{s}{t}\] Using the steady state creep equation: \[\dot \gamma = A \tau^n \exp\left(-\frac{Q}{RT}\right)\] Taking natural logarithms: \[\ln \dot \gamma = \ln(A) + n\ln(\tau) - \frac{Q}{RT}\] and using the expressions for the average shear stress and average strain rate: \[\ln\left(\frac{s}{t}\right) = K + n\ln(N) - \frac{Q}{RT}\;\;\;\;\;\;\;(*)\] where \(K = \ln(A) + n\ln(B) - \ln(C)\). This equation can be used to evaluate the creep parameters from (plots of) experimental data. Experimental Set-Up - | | | | - | - | | Creep deformation of a coil of solder at 28°C | Creep deformation of a coil of solder at 85°C | In order to create a temperature-controlled environment, the coil is placed inside a Perspex tube. Once the temperature in the tube has stabilised, the coil is allowed to creep under its own weight for one minute. The videos above show qualitatively how the creep rate varies with temperature. Evaluating Parameters - The spacings between the turns of the coil can be obtained from photos or, more simply, by just rotating the support cylinder until it is horizontal (so that creep will stop).  Spacings are then readily measured directly with a ruler. The spacing of the Nth turn is taken as the distance between the Nth and (N-1)th turn, i.e. the spacing of the 1st turn is the distance between the 0th and 1st turn.  Using the equation above marked with an asterisk, a plot can be constructed of ln(s/t) against ln(N), at a given temperature, with the stress exponent, n, obtainable from the gradient. A similar operation can be carried out using data for a number of different temperatures, with ln(s/t) plotted against 1/T, for a selected value of N, to obtain the activation energy, Q.
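As an illustration of the parameter evaluation just described, the sketch below fits the gradient of ln(s/t) against ln(N), and of ln(s/t) against 1/T, by linear regression. The spacing values are hypothetical, made up purely to show the procedure; they are not real measurements.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1
t = 60.0   # creep time, s

# Hypothetical spacings s (mm) of turns N = 1..6 after one minute of creep
N = np.arange(1, 7)
s = np.array([0.4, 1.1, 1.9, 3.0, 4.2, 5.6])

# Stress exponent n from the gradient of ln(s/t) vs ln(N)  --  Eqn (*)
n_fit = np.polyfit(np.log(N), np.log(s / t), 1)[0]
print(f"stress exponent n ≈ {n_fit:.2f}")

# Activation energy Q: repeat at several temperatures for one chosen turn;
# the gradient of ln(s/t) vs 1/T is -Q/R (hypothetical spacings again)
T = np.array([301.0, 325.0, 350.0, 358.0])   # K
s_N3 = np.array([0.8, 1.9, 4.4, 6.0])        # spacing of turn N = 3, mm
Q_fit = -R * np.polyfit(1.0 / T, np.log(s_N3 / t), 1)[0]
print(f"activation energy Q ≈ {Q_fit / 1e3:.0f} kJ/mol")
```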
Multiaxial Creep Testing – Indentation Creep Plastometry Indentation creep plastometry is a fairly novel procedure involving use of an ***indenter***, as opposed to conventional uniaxial testing machines.  The process has similarities to indentation plastometry, which is described in a separate TLP.  The advantages noted there, arising from the procedure being non-destructive, applicable to small samples of simple shape and offering scope for mapping of properties across a surface, also apply here.  Furthermore, while uniaxial creep testing requires a series of tests at different stress levels (and experimental difficulties in maintaining a constant true stress), a single indentation creep experiment allows full characterisation of the creep behaviour (as captured in a constitutive law). Method ### Recess Creation A spherical indenter is normally used.  As with conventional creep testing, it is important to avoid ***conventional (time independent) plastic deformation***.  This is particularly an issue in the initial stages, when the contact area would be very low as the indenter penetrated a flat surface, and the stresses correspondingly high.  In order to avoid this problem, a (spherical) recess is first created in the sample, matching the indenter.  This procedure can be used to ensure that the stresses in the sample never reach the yield stress (at the temperature concerned). ### Test Procedure The selected load is quickly applied and the penetration recorded as a function of time.  As with Indentation Plastometry, the core of the procedure is iterative numerical simulation of the test, using the Finite Element Method (FEM).  A constitutive creep relationship, such as the M-N law, is used in the model, with trial, and then improved, values of the (three) parameters in it, with the goodness of fit parameter (S) between predicted and measured displacement-time curves being used to guide the optimisation.  The simulation below shows the outcome of this operation for a particular case (a Ni sample at 750°C, with a 1 kN load, an indenter radius of 2 mm and an initial recess depth of 1 mm, with a 2 mm radius of curvature).  The evolving von Mises stress and strain fields are shown, together with measured and modelled displacement-time plots, with the optimised M-N parameter set. Predicted and measured curves agree quite well  -  the value of *S* is \(10^{-3.2}\), which is acceptably close to zero.  The parameter set that gave this level of agreement is shown below:

| \(C \exp(-Q/RT)\) / \(\rm Pa^{-n}\,s^{-(m+1)}\) | n | m |
| - | - | - |
| \(4.3 \times 10^{-8}\) | 2.46 | −0.65 |

![Graph showing how our predictions compare with the experimental data for uniaxial tensile creep testing](images/Predicted_tensile_curves.jpg) Graph showing how the predictions compare with the experimental data for uniaxial tensile creep testing Using this parameter set, a prediction can be made (using the M-N law directly, with no need for FEM simulation) of the outcome of uniaxial testing, with any particular applied stress level. The outcome of such operations is compared with experimental data, for this material and test temperature, in the plot shown here. Two sets of predicted plots are shown for each stress level. The first set corresponds to use of Eqn.(2) on the page "Constitutive Laws for Creep" - ie the standard M-N law. In fact, the experimental data were obtained during testing with a fixed load (constant nominal stress), so the appropriate plots are really those marked in the figure as referring to Eqn.(3), which were obtained using the differential form of the M-N law - ie Eqn.(3) on the page "Constitutive Laws for Creep".
It can be seen that the agreement with these plots is good, apart from periods towards the end of the tests with the higher applied stress levels, when "tertiary" behaviour is seen, probably because the true stress was approaching the yield stress (~65 MPa), so that conventional plasticity was stimulated. Designing for Creep Resistance - Nickel Based Superalloys = With over 100,000 flights a day, jet engines are commonplace in today's world. Jet engines work more efficiently at higher temperatures.  Turbine entry temperatures can be of the order of 1,500°C and, while cooling channels and Thermal Barrier Coatings ensure that the blades do not reach these temperatures, they may be at around 1,000°C  -  a temperature at which most metals are likely to undergo rapid creep under the stresses concerned (~100 MPa). The gradual deformation of these blades is not only financially costly, as they will have to be replaced, but could lead to catastrophic failure and hence, in some cases, loss of life. This is why it's important to understand the creep behaviour of the materials we use and, moreover, how we can produce materials with improved creep resistance. Diffusion and dislocation motion are the main mechanisms by which the creep of materials occurs and, as such, limiting them is key to designing creep resistant materials. Jet engine blades are made from Nickel-based superalloys.  A brief outline is presented here of how they are designed, given that resistance to creep at very high temperatures is a key requirement. Limiting Diffusion ### 1. Homologous temperature The (substitutional) ***diffusivity*** of a material depends on how easily atoms move between vacant sites in the lattice, and on the vacancy concentration.  Both jump rates and vacancy concentrations are higher at higher temperatures, approaching limiting values close to the melting temperature.  It is thus the ***homologous temperature*** (T/Tm) that is important.  For example, diffusion (and hence creep) rates are relatively high at room temperature for Pb (T/Tm ~0.5), but negligible for Ni (T/Tm <0.2). There are clearly advantages in using materials with high melting temperatures. ### 2. Grain structure There are two important points to be made regarding grains. Firstly, grain boundaries themselves are detrimental to creep resistance because they provide ***fast diffusion paths*** through the material and thus accelerate (Coble) creep.  Secondly, the length of a grain in the direction of applied stress dictates the distance an atom must diffuse in order to reach the "polar" regions from the "equatorial" regions. Elongating the grains parallel to the applied stress direction will thus tend to reduce the creep rate.  The latter effect can be obtained with a columnar grain structure, which is relatively easy to create using controlled directional solidification.  Even better is a ***single crystal*** (with a selected crystallographic orientation).  This requires somewhat greater control during the casting process, although it can be done and this is now routine for many types of blade.  The figure below illustrates this. ![Progression in manufacture of creep-resistant microstructure over time](images/turbine blades.png) Progression in manufacture of creep-resistant microstructure over time ### 3. Crystal Structure In addition to the homologous temperature, there are other factors that can affect the diffusivity.
These include the crystal structure, which dictates both the packing density and the nature of the paths taken during atomic jumps between vacant sites.  In general, diffusive jumps are more difficult in close-packed, high-symmetry structures, such as ***fcc***.  There are also additional factors related to crystal structure.  For example, diffusion is difficult in ***ordered structures***, since diffusion tends to create disorder (which is thermodynamically opposed).  The microstructure of Ni-based superalloys is typically a 2-phase one, composed of an intimate mixture of γ (fcc Ni) and γ′ (an ordered structure based on Ni with Al and/or Ti), with coherent or semi-coherent interfaces between them.  These structures are illustrated below. Limiting Dislocation Motion - While the two are often inter-related during creep, it is important to inhibit dislocation motion, as well as diffusion.  This happens in several ways in Ni-superalloys.  One of these concerns the ordered γ′ phase.  An ordered structure is degraded, not only by diffusion, but also by dislocation glide.  Such motion is therefore opposed, an effect often described as ***order hardening***.  (In fact, if dislocations move in pairs, sometimes termed superdislocation pairs, then the second of the pair may restore the order, but this does impose a constraint on the motion  -  see below.)  There is also an effect of the coherency of the γ/γ′ interface, which creates lattice strain in the vicinity.  This also tends to inhibit dislocation glide. ![phase structures](images/unit cells.png) Unit cell of γ on the left and that of γ′ on the right [2] Precipitates of γ′ exist within a γ matrix, and so dislocations will have to pass from γ to γ′ in order to move through the alloy. The shortest ***Burgers vector*** in γ is (a/2)<110>, but in γ′ this is not a ***lattice vector***. As a dislocation passes from γ to γ′, its Burgers vector must be maintained and it will create a region with unfavourable bonding (an ***anti-phase boundary***) and hence an associated energy cost. Its motion is therefore slowed considerably in γ′ compared to that in γ, and it can only travel at any appreciable rate through γ′ once another dislocation passes into γ′, restoring the unfavourable bonding to its ordered state – as the two move through together, the anti-phase boundary between them maintains a constant width and there is thus no added cost to their motion through γ′. This can be seen in the micrograph below: ![micrograph of dislocation at gamma-gamma prime](images/dislocation networks.png) Micrograph showing dislocations passing through a Ni superalloy [3] It can be seen that dislocations D1 are impeded by γ′ precipitates, with D2 following unimpeded. This mechanism results in much slower dislocation glide through the alloy as a whole and so is effective in reducing the rate of dislocation creep. The finer the dispersion of γ′ precipitates, the more effective the strengthening will be. The two dislocations that pass through γ′ together are called ***super-partial*** dislocations and the sum of their Burgers vectors is a lattice vector - a<110> - in the γ′ phase. It is important to note that the γ - γ′ interface is coherent and therefore coarsening does not occur over time. [2]  Y. M. Wang-Koh (2017) *Understanding the yield behaviour of L12-ordered alloys,* Materials Science and Technology, 33:8, 934-943, DOI: 10.1080/02670836.2016.1215961 [3] Lv, X., Sun, F., Tong, J. et al. J. of Materi Eng and Perform (2015) 24: 143.
https://doi.org/10.1007/s11665-014-1307-y Summary = This TLP covers the mechanisms involved in creep deformation, how it can be modelled and how it can be investigated experimentally. There is also coverage of the need for creep-resistant materials and how they can be designed. Firstly, the fundamental mechanisms of creep are covered, including the dependence of each on levels of stress and (homologous) temperature.  The key role of diffusion is emphasized, leading to a dependence on time that is not exhibited by conventional plastic deformation. Secondly, the concept of (empirical) constitutive laws to characterize creep is introduced.  The emphasis is often on Stage II (steady state) creep, but in practice the primary regime often constitutes an important part of the overall behaviour and a law is presented that covers both regimes.  Mention is also made of the so-called tertiary regime (immediately before final rupture), and the possible ways in which it can arise. Thirdly, there are descriptions of the experimental procedures that can be used to obtain creep characteristics.  The most common of these are the conventional uniaxial (tensile or compressive) tests, which need to be carried out with a series of different applied stresses in order to obtain the values of parameters in constitutive laws.  These procedures are rather cumbersome and time consuming.  It is also shown that test procedures exist in which the stress and strain fields are more complex, but can be analysed – either using a set of equations or via Finite Element Modelling.  The creeping coil experiment and the Indentation Creep Plastometry procedures are described as examples of these, the latter having the potential to at least partly replace conventional creep testing. Finally, the importance of creep resistance in technological applications is illustrated using the well-known example of Ni-based superalloys in turbine blades for aero-engines and power generation plants.  The microstructure of these components, and the ways in which their design and production have been approached, are related to the optimisation of creep resistance, based on various principles outlined earlier in the TLP. Questions = ### Quick questions*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*1. Which of the following could be reasons tertiary creep is observed? | | | | | - | - | - | | | a | Rising true stress can cause the strain rate to rise | | | b | Microstructural damage such as cavitation can occur | | | c | Rising true stress can exceed the yield stress, causing conventional plastic deformation | | | d | All of these | 2. In the γ/γ' structure of Ni-based superalloys, why is the existence of a coherent interface important? | | | | | - | - | - | | | a | It leads to lattice strains that can inhibit dislocation motion | | | b | The ordered structure of the γ' phase inhibits diffusion | | | c | The ordered structure of the γ' phase inhibits dislocation motion | | | d | All of these | 3. What is the function of the recess in Indentation Creep Plastometry?
| | | | | - | - | - | | | a | It prevents tertiary creep | | | b | It allows the stresses created during the early stages of the process to be kept relatively low (below the yield stress) | | | c | It ensures that the indenter is located properly before indentation starts | | | d | It removes the near-surface region, which could have different properties from the bulk | | | e | It eliminates primary creep, so that the steady state regime is immediately established |### Deeper questions*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*4. Why is creep deformation different from conventional deformation? | | | | | - | - | - | | | a | Strain develops at a much slower rate in the material | | | b | Strain produced depends on the time for which the stress is applied | | | c | Creep cannot occur at room temperature | | | d | The strain produced doesn't depend on the applied stress | 5. Which of the following statements is true? | | | | | - | - | - | | | a | Only Coble creep is controlled by diffusion | | | b | Only Nabarro-Herring creep is controlled by diffusion | | | c | Only dislocation creep is controlled by diffusion | | | d | All of the above mechanisms are controlled by diffusion | 6. Which of the following is not an assumption that we make during the creeping coil experiment? | | | | | - | - | - | | | a | Assume constant true stress when we have constant load | | | b | Assume that the stress is uniform throughout each turn | | | c | Assume that only steady state creep occurs throughout | | | d | Assume that the creep regime is the same throughout the coil | 7. Which of the following features of Ni superalloy turbine blades helps to make them resistant to creep? (There may be more than one) | | | | | | - | - | - | - | | Yes | No | a | Fine grain structure | | Yes | No | b | Single crystal | | Yes | No | c | High dislocation density | | Yes | No | d | High purity | | Yes | No | e | Order hardening due to γ' precipitates in γ matrix | | Yes | No | f | Low inherent diffusivity of FCC structure | Going further = There are many publications covering creep, over a wide range of depths. Some go into much greater detail than this TLP. The following books provide a good overview: **Books** *Fundamentals of Creep in Metals and Alloys,* Michael Kassner, Butterworth-Heinemann, 2015, ISBN: 9780080994277 *Creep of Metals and Alloys*, RW Evans & B Wilshire, CRC Press, 1985, ISBN-10: 0904357597 Regarding Indentation Creep Plastometry, which is a very recent development, there are as yet no published books and indeed the software necessary to implement the technology is not yet widely available in user-friendly, commercially mature form.  However, there are websites that describe the methodology, where such access is likely to become available in due course.
Aims On completion of this TLP package, you should: * understand the concept of crystallographic texture; * be aware of different methods of measuring and representing texture; * be aware of the effects of texture; * be able to identify specific textures from pole figures and crystal orientation distribution functions. Before you start You should be familiar with the use of stereograms to plot poles of crystals and have a good knowledge of atomic structure and be able to index planes with Miller indices. You should also be familiar with mechanisms of slip in single crystals. It would be helpful to read several related TLPs first. Introduction **What is crystallographic texture?** In a polycrystalline material, the crystallographic axes of the grains can be oriented randomly with respect to each other, or they can be oriented so that there is a non-random distribution. If there is a preferred orientation, then we say that the material has *crystallographic texture*. Measurement of Texture In the past, optical methods and etching have been used to determine grain orientation, but nowadays texture is almost exclusively measured by diffraction techniques. Diffraction of x-rays, electrons, and neutrons will all be discussed in this TLP. X-ray diffraction - The most common method of measuring texture uses x-ray diffraction and is known as the "Schulz reflection method". The apparatus used is known as a four-angle diffractometer or a *Eulerian cradle*. The source of x-rays and the detector are oriented so that a particular value of 2θ is specified. This allows for a single Bragg reflection to be measured. The stage of the cradle is tilted and rotated systematically, so that all angular orientations of the sample are investigated. The animation below shows how the Eulerian cradle is used for measurement of texture using the reflection method. When the specified lattice plane of a crystallite fulfils the Bragg condition, the detector will record the reflection. For a polycrystalline material, the intensity of detected x-rays will increase when there are more crystallites in a specific orientation. The intensity for a given orientation is proportional to the volume fraction of crystallites with that orientation. Areas of high and low intensity suggest a preferred orientation, while constant intensity at all angles would occur in a random polycrystalline aggregate. X-ray diffraction may be carried out so that the x-rays are reflected from the surface of the sample, or they may penetrate via transmission. Transmission is only suitable for thin films or wires because of the high absorption of the x-rays by many materials. In some materials the bulk and surface textures may be different, e.g. in some rolled textures. Therefore, it is important to identify which texture is of interest. Different sources of radiation can lead to different degrees of penetration, and hence allow the measurement of either bulk or surface textures. Measurement of Texture 2 Electron Backscatter Diffraction (EBSD) - An alternative method of texture determination is electron backscatter diffraction in a scanning electron microscope. Within a single grain, the electron beam is fixed at a point on the surface. At particular angles the beam is diffracted, so that there is a change in the intensity of the reflection measured. This leads to the formation of a *backscatter Kikuchi pattern* made up of Kikuchi lines. ![Boron doped 001 Si Wafer with yellow bands](images/ebsd.jpg) Boron doped (001) Si wafer.
The blue cross is at 90° to the incident electron beam. The wafer is oriented so that [001] is normal to the stage. The stage is held at an angle of 70°, so that the blue cross is 20° from the stage normal. Poles are determined and plotted by the commercial EBSD software using symmetry and vector addition considerations. Image courtesy of Dr Jeff Wheeler. The location and symmetry of the backscatter Kikuchi pattern allows specific bands of Kikuchi lines to be indexed unambiguously. The indexed patterns are used to describe the orientation of the grain within typically an experimental error of ±2°. The information stored by the commercial software using this method includes specimen coordinates as well as the crystal orientation. Therefore, it is possible to build up a two dimensional map of the orientation of grains on the surface of a polycrystalline material. In this image of recrystallised stainless steel, grains, annealing twins and grain boundaries are apparent. The orientation at each pixel in the image is represented by a different colour. Each coloured pixel is defined by a single EBSD measurement. The colours can be separated into their red, green, and blue constituents by fractal analysis. The orientation of the crystal can be determined from the key. Each grain orientation is described with reference to an external frame of reference. Here, the orthogonal axes of the external frame of reference with respect to which the grain orientations are defined are (i) the direction normal to the plane of the sheet, (ii) the direction parallel to the tensile axis, and (iii) the transverse direction parallel to the plane of the sheet. Thus, for example, there is a large grain coloured mostly red in the lower left-hand corner of the picture. Within this grain there are subtle changes in rotation giving rise to regions which are more orange than red in colour: this indicates the occurrence of low-angle sub-grain boundaries within this large grain. More significantly, there are grey, green and blue-grey regions abutting this red grain. Each of these three regions is twinned with respect to the red grain. Neutron diffraction - Neutron diffraction can be used in a similar way to x-ray diffraction. There is a large reduction in absorption but a much higher angular resolution in neutron diffraction in comparison with x-ray diffraction. In-situ texture changes due to environmental factors (e.g. temperature changes and stress) can be measured using neutron diffraction. Representing Texture Pole figures A pole figure is simply a stereogram with its axes defined by an external frame of reference with particular *hkl* poles plotted onto it from all of the crystallites in the polycrystal. Typically, the external frame is defined by the normal direction, the rolling direction, and the transverse direction in a sheet (ND, RD and TD respectively. Occasionally, CD meaning cross direction is used instead of TD.) The animation below shows the relationship between the orientation of the crystal and the stereographic projection obtained for the <100> poles. Drag an atom in the green sphere to reorientate the unit cell of the grain under consideration. This will alter the projections of the [100], [010] and [001] directions on the stereogram inside the rectangle. Press 'Add grain' to add the [100], [010] and [001] directions of another grain, up to a maximum of two additional grains. 
Try altering their orientations so that all three are similar and then different, and notice how the positions of the poles change. A pole figure for a polycrystalline aggregate, which shows completely random orientation, does not necessarily appear as might naively be expected. Angular distortions inherent in the stereographic projection result in the accumulation of points close to the centre of the pole figure, as shown in the image below. ![Diagram showing accumulation of points close to the centre of the pole figure](images/randompolefig.jpg) If the material shows a degree of texture, the resultant pole figure will show the accumulation of poles about specific directions. ![Diagram of 100 pole figure showing "cube" texture](images/cubepolefig_s.jpg) A 100 pole figure showing "cube" texture – the {100} poles of the crystallites are oriented so that they are aligned with the axes defined by the rolling, transverse, and normal directions. A single crystal can be plotted on the pole figure and there is no ambiguity regarding its orientation. However, as more crystallite poles are plotted onto the pole figure, the specific orientation of a particular crystallite can no longer be defined. For a large number of grains in a polycrystal, poles may overlap on the pole figure, so that the true orientation density is not clearly represented. In this case, contours tend to be used instead. Regions of high pole density have a high number of contours, while regions with low pole density have a few, greatly spaced contours. ![](images/polefigcube.jpg) 100 pole figure showing "cube" texture and pole density represented using contours rather than discrete points. Inverse pole figures Instead of plotting crystal orientations with respect to an external frame of reference, inverse pole figures can be produced which show the rolling, transverse, and normal directions (RD, TD and ND respectively) with respect to the crystallographic axes. Typically, these are plotted on a *standard stereographic triangle* as shown below ![RD, TD and ND plotted on a standard crystallographic triangle](images/inversepolefig.jpg) Representing Texture 2 Crystal Orientation Distribution Function (CODF) As previously mentioned, pole figures do not give information about the orientation of a particular crystal relative to another. More information can be gathered from a CODF. CODFs are constructed by combining the data from several pole figures. This requires intensive use of mathematics. More details can be found in the references listed in the Going further section. CODFs describe the orientation of each crystal relative to three *Euler angles* (φ, ψ, and θ). The Euler angles define the difference in orientation between the crystal axes and the deformation axes (i.e. the RD, the ND and the TD). One convention for Euler angles (and the convention described here) is known as the Roe convention. An alternative convention can be used where the θ-rotation occurs about the x1 direction; this is known as the Bunge convention. These two conventions are related by:

ψ*Roe* = ψ*1,Bunge* − π/2

θ*Roe* = φ*Bunge*

φ*Roe* = ψ*2,Bunge* + π/2
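For convenience, here is a minimal sketch (a hypothetical helper, not part of any TLP software) implementing these conversions; angles are in radians.

```python
import numpy as np

def bunge_to_roe(psi1, phi, psi2):
    """Convert Euler angles from the Bunge to the Roe convention,
    using the relations quoted above."""
    return psi1 - np.pi / 2, phi, psi2 + np.pi / 2

def roe_to_bunge(psi, theta, phi):
    """Inverse conversion, from Roe back to Bunge."""
    return psi + np.pi / 2, theta, phi - np.pi / 2

# Round-trip check on an arbitrary orientation (45°, 30°, 60°)
angles = np.radians([45.0, 30.0, 60.0])
print(np.degrees(roe_to_bunge(*bunge_to_roe(*angles))))  # -> [45. 30. 60.]
```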
A single crystal is completely described by a point in a cube with axes of φ, ψ and θ. This cube is referred to as *Euler space* and is often shown as a series of cross-sections. ![Diagram showing Euler space as a series of cross sections](images/codf 1.jpg) These sections of a CODF are for a steel sheet cold-rolled to an 80% reduction in thickness. Any values of *φ* between 0° and 90° can be used to produce the sections. In the image above, the values of φ that have been chosen are: 0°, 20°, 25°, 35°, 45°, and 55°. The contours of the sections make up part of a three-dimensional surface in Euler space, as seen in the following animation. The highlighted contours on the sections correspond to the similarly coloured surface in the 3D plot below. The area of highest density, and hence strongest texture, is bound by the yellow surface centred around φ = 26.6°, ψ = 39.2°, and θ = 65.9°. *The original figure can be found in D.J. Goodwill's Ph.D. thesis, University of Cambridge (1972) – The relationship between texture and properties of steel sheets.* Texture diagrams such as those produced by Davies, Goodwill, and Kallend (1971) (see the Going further section) can be used to identify the texture present. In this case it is (211)[01̄1]. (211) describes a plane which is orientated parallel to the plane of the sheet, while [01̄1] is a direction parallel to the rolling direction. ![Image of charts](images/CHARTS.jpg) Origins of Texture Texture can arise whenever there is a preferred crystallographic orientation within a polycrystalline material. Solidification Directional solidification leads to texture when columnar grains grow with a preferred crystallographic direction along the heat flow direction. Crystals with a fast growth direction parallel to that of the heat flow will dominate the final structure. The preferred growth direction of dendrites in cubic metals is <100>. In the animation above it can be seen that, close to the chilled mould, texture is random, while at great distances from the mould, <100> fibre texture is prominent. This variation in texture occurs because nucleation at the mould wall is a random process with respect to orientation, whereas growth of the grains is strongly dependent on the orientation. Mechanical deformation The effect of slip during deformation is described in other TLPs. Slip planes of individual crystals will rotate, so that the direction of slip will rotate towards the tensile axis. If the axis along which force is applied is constrained, so that it does not deviate from its original direction, then the crystal must rotate with respect to the axis. In a polycrystalline sample, an originally random distribution of crystal orientation will become non-random and tend towards an *ideal orientation*. The ideal orientation depends upon a number of factors including the crystal structure of the material, its temperature, the alloying additions, and the processing route. The effect of wire drawing, rolling, and annealing on the texture of c.c.p. and b.c.c. metals can be investigated in the following animation. Thin films Pronounced texture is very common in thin films. Even when there is no crystallographic matching between the crystal structure of the growing film and the substrate, the grains in the film show a strong preference for a particular plane to be parallel to the substrate surface. For example, films of c.c.p. metals prefer to have a close packed plane parallel to the surface of the substrate. Effect of Texture = Mechanical properties - The effects of anisotropy can either cause problems for the mechanical properties of metals, or they can be exploited to the benefit of the manufacturer. The phenomenon of "earing" in deep drawing, where a wavy edge forms on the top of a drawn cup, allows the effect of texture to be seen easily. Depending on the degree of preferred orientation, two, four or six ears will form and extensive trimming is required to produce a uniform top.
However, in beverage can and automobile body manufacture, plastic instability is avoided by controlling the texture in the thickness direction. This allows very thin sheets to be produced without fracture.

Magnetic properties -

In transformers, the texture of grain-oriented silicon steel (GOSS steel) is controlled to minimise core loss. Two textures can be developed in silicon steels: the cube-on-face or {001}<100> texture, in which a {001} plane lies in the sheet plane, or the cube-on-edge {011}<100> texture (also known as the Goss texture, after its discoverer, Norman P. Goss), in which a {011} plane lies in the sheet plane. The Goss texture arises initially as a small component during rolling, but forms large grains during recrystallisation. By promoting the development of the Goss texture, the magnetic flux density can be increased by up to 30% relative to steel without this texture.

Summary =

Texture describes the orientation distribution of crystals within a polycrystalline aggregate. This can be measured using X-ray diffraction, EBSD, and neutron diffraction, and represented in the form of pole figures, inverse pole figures, and crystal orientation distribution functions. Texture can arise from processes such as solidification, mechanical deformation, annealing, and thin film growth. The presence of texture may be problematic, but if the textures present and their effects are understood, then it can be exploited to great benefit.

Questions =

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. What is crystallographic texture?

| | | |
| - | - | - |
| | a | The distribution of orientations of crystallites within a polycrystalline sample. |
| | b | The orientation of a polycrystalline sample. |
| | c | The distribution of orientations of crystallites within a single crystal. |
| | d | The orientation of a surface of a sample. |

2. Which of these are limitations of pole figures? (Select all that apply)

| | | |
| - | - | - |
| | a | Specific *hkl* planes cannot be plotted |
| | b | Poles in the final plot for a polycrystalline material are not identified with particular crystals |
| | c | Information about the crystallite location in a sample is absent |
| | d | The orientation of a crystal must be described relative to another |

### Deeper questions

*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

3. A sample of rolled α-brass (ccp) has a very strong (110)\([1\bar 12]\) texture. Using a Wulff net, sketch stereograms showing the distribution of 111 and 200 poles with respect to the normal to the sheet and the rolling direction.

Going further =

### Books

A readable book covering measurement and analysis of textures as well as modelling the development of texture during plastic deformation:

* *Texture and Anisotropy – Preferred orientations in polycrystals and their effect on materials properties* by U.F. Kocks, C.N. Tomé and H.-R. Wenk, Cambridge University Press, Cambridge, 1998

This book is a good introduction to crystallography and the basics of texture:

* *Crystallography and Crystal Defects*, 2nd edition, A. Kelly and K.M. Knowles, John Wiley and Sons, Chichester, 2012

This book covers the mathematical theory behind texture analysis including the construction of CODFs:

* *Texture Analysis in Materials Science – Mathematical methods* by H.J. Bunge, Butterworths & Co.,
1982

Original papers containing charts to determine ideal orientations from crystallite orientation distribution functions:

* *Charts for analysing crystallite distribution function plots for cubic materials* by G.J. Davies, D.J. Goodwill and J.S. Kallend, *J. Appl. Cryst.* **4** (1971) 67-70
* *Charts for analysing crystallite orientation distribution function plots for hexagonal materials* by G.J. Davies, D.J. Goodwill and J.S. Kallend, *J. Appl. Cryst.* **4** (1971) 193-196
Aims

On completion of this TLP you should:

* Have an understanding of the basic concepts of crystallography, i.e. lattices, motifs, symmetry elements etc.
* Be able to identify lattices and symmetry elements within those lattices.
* Know about the different types of unit cell.
* Understand the idea of close-packing and packing efficiency.
* Be familiar with the different crystal systems and Bravais lattices.

Before you start

There are no special prerequisites for this TLP.

Introduction

Crystalline materials are characterised by a regular atomic structure that repeats itself in all three dimensions. In other words the structure displays **translational symmetry**. Translational symmetry is illustrated in this image of the crystal structure of the mineral cordierite (Mg2Al4Si5O18) taken with a high resolution transmission electron microscope. The image is a projection through a very thin slice (~200 Å thick) of the atomic distribution. Black spots correspond to hollow channels through the structure and white spots correspond to regions of high electronic density, arranged around the channels in 6-fold rings (Scale: the distance between the black spots is ~9.7 Å). The structure consists of a simple group of atoms that repeats itself periodically in space. This periodicity can be revealed using the concept of a **lattice**.

![Micrograph demonstrating translational symmetry in cordierite](images/intro_lattice.png)

A. Putnis, *Introduction to Mineral Sciences*, Cambridge University Press, 1992 - frontispiece

Lattices

Crystalline structures are characterised by a repeating pattern in three dimensions. The periodic nature of the structure can be represented using a lattice. To generate the lattice from any repeating pattern, we choose an arbitrary reference point and examine its environment. We then simply mark in all the points in the pattern that are identical to the chosen reference point. The set of identical points is the lattice, and each point within it is a **lattice point**.

![unit cell example](images/unitcell_example.gif)

A. Putnis, *Introduction to Mineral Sciences*, Cambridge University Press, 1992

Note that not all white discs within this pattern are exactly equivalent, and therefore they are not all lattice points. The discs marked with a black spot have different arrangements around them than those that are unmarked (each is surrounded by 3 others in a triangle, but the orientation of the triangles is different). To practise identifying the lattice points within a more complex repeating pattern, try the following game! The brick pattern corresponds to an unusual style of Danish bricklaying where after each normal brick, the next is laid breadthwise, and so on. The original image was taken from the wall of the Centre for Electron Nanoscopy in Copenhagen (thanks to Dr Rafal Dunin-Borkowski).

Unit Cell =

The structure of a crystal can be seen to be composed of a repeated element in three dimensions. This repeated element is known as the **unit cell**. It is the building block of the crystal structure. We define the unit cell in terms of the lattice (set of identical points). In three dimensions the unit cell is any parallelepiped whose vertices are lattice points, in two dimensions it is any parallelogram whose vertices are lattice points. Of course this definition means that there are an infinite number of possible unit cells. So, in general, the unit cell is chosen such that it is the smallest unit cell that reflects the symmetry of the structure.
There are two distinct types of unit cell: **primitive** and **non-primitive**. Primitive unit cells contain only one lattice point, which is made up from the lattice points at each of the corners. Non-primitive unit cells contain additional lattice points, either on a face of the unit cell or within the unit cell, and so have more than one lattice point per unit cell.

![example of primitive unit cell](images/unitcell_example2.gif)

It is often the case that a primitive unit cell will not reflect the symmetry of the crystal structure. A suitable non-primitive unit cell will be picked in such cases.

It was mentioned above that the (eight) lattice points at the corners of the unit cell contribute only one lattice point to the cell. This is because the lattice points at the corners are shared between eight unit cells. Each corner lattice point therefore is equivalent to 1/8 of a lattice point per unit cell. Similarly, lattice points on the edge of a unit cell are shared among four unit cells and are worth 1/4 of a lattice point per unit cell. Lattice points on the face of a unit cell are shared between two unit cells and are worth 1/2 of a lattice point per unit cell. Lattice points contained entirely within the unit cell are worth one lattice point per unit cell.

The most common types of unit cell are the **primitive** (P) unit cell with one lattice point per unit cell; the **face centred** (F) unit cell with additional lattice points **at the centre of each face** and four lattice points per unit cell; and the **body centred** (I) unit cell with a lattice point in the middle of the unit cell and two lattice points per unit cell. Other cell types are the C face centred unit cell and the rhombohedral unit cell.

![unit cell types](images/unit_cell_type.png)

Lattice geometry

To define the geometry of the unit cell in 3 dimensions we choose a right-handed set of crystallographic axes, x, y, and z, which point along the edges of the unit cell. The origin of our coordinate system is at one of the lattice points.

Lattice parameters

The lengths of the unit cell along the x, y, and z directions are defined as a, b, and c. Alternatively, we can think of the sides of the unit cell in terms of vectors **a**, **b**, and **c**. The angles between the crystallographic axes are defined by:

α = the angle between **b** and **c**
β = the angle between **a** and **c**
γ = the angle between **a** and **b**

a, b, c, α, β, γ are collectively known as the **lattice parameters** (often also called ‘unit cell parameters', or just ‘cell parameters').

![Defining the lattice parameters](images/lattice_parameters.gif)

Lattice vectors -

A lattice vector is a vector joining any two lattice points. Any lattice vector can be written as a linear combination of the unit cell vectors **a**, **b**, and **c**:

**t** = U **a** + V **b** + W **c**

In shorthand, lattice vectors are written in the form:

**t** = [UVW]

Negative values are not prefixed with a minus sign. Instead a bar is placed above the number to denote that the value is negative. For example, the lattice vector

**t** = −U **a** + V **b** − W **c**

would be written in the form \({\bf{t}} = [\bar U V \bar W]\).

Lattice directions are written the same way as lattice vectors, in the form [UVW]. The direction in which the lattice vector is pointing is the lattice direction. The difference between lattice directions and lattice vectors is that a lattice vector has a magnitude, which can be shown by prefixing the lattice vector with a constant. By convention U, V and W are integers.
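As a concrete illustration of **t** = U **a** + V **b** + W **c**, the sketch below (Python with NumPy; the orthorhombic cell dimensions are made-up illustrative values) converts a set of [UVW] indices into a Cartesian vector and finds its length.

```python
import numpy as np

# An orthorhombic cell (alpha = beta = gamma = 90 deg); lengths in angstroms
# are illustrative values only.
a_vec = np.array([4.0, 0.0, 0.0])   # a along x
b_vec = np.array([0.0, 6.0, 0.0])   # b along y
c_vec = np.array([0.0, 0.0, 8.0])   # c along z

def lattice_vector(U, V, W):
    """Return the Cartesian vector t = U a + V b + W c."""
    return U * a_vec + V * b_vec + W * c_vec

t = lattice_vector(1, -1, 2)        # the lattice vector [1 (bar-1) 2]
print(t, "length =", np.linalg.norm(t), "angstroms")
```

For a non-orthogonal (e.g. monoclinic or triclinic) cell the same sum applies; only the Cartesian components of **a**, **b** and **c** change.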
![lattice directions diagram](images/directions.gif)

Many crystal systems have elements of symmetry. In these systems, certain sets of directions are symmetrically equivalent to each other. The set of directions that are symmetrically related to the direction [UVW] are written <UVW>.

Crystal structure =

The structure of a crystal can be described by combining the following elements: the lattice type, the lattice parameters, and the motif. The **lattice type** defines the location of the lattice points within the unit cell. The **lattice parameters** define the size and shape of the unit cell. The **motif** is a list of the atoms associated with each lattice point, along with their fractional coordinates relative to the lattice point. Since each lattice point is, by definition, identical, if we add the motif to each lattice point, we will generate the entire structure:

Plan view -

Knowing the motif and lattice it is possible to construct a **plan view** of the crystal structure. The plan view is the standard representation of a crystal structure and is very easy to produce. It is generally the 2D projection looking down the [001]/z-axis of the unit cell. Note this is equivalent to constructing a projection on the (001) plane. Refer to the TLP on lattice planes for more information. The plan view generally displays a 2×2 array of unit cells. The heights of the atoms within the unit cell are represented by fractions next to them, the fraction indicating that atom's fractional height in terms of the unit cell height (c) (atoms at the top and bottom of the unit cell have no numbers next to them). On constructing the plan view it is essential not only to indicate the heights of atoms within the unit cell but also to define the crystallographic axes you are using, along with tracing out the unit cell.

Close packing and packing efficiency

In many cases the atoms of a crystal pack together as tightly as possible. Approximating atoms as hard spheres, they will achieve this by forming a **close-packed** structure. This is the case for most metallic structures.

![cubic close packed structure](images/ccp.jpg)

The main ideas of close packing are demonstrated in the animation below.

In a close-packed structure the **close packed directions** are the directions in which atoms are touching. For a **hcp** structure the close packed directions are [100], [010] and [110] and their negatives. Directions that are related by symmetry are represented using the notation <UVW>. The close packed directions for **hcp** are then <100>.

![hexagonal close packed](images/hcp_cpd.gif)

The close packed directions for **ccp**, which has a **fcc** unit cell, are along the diagonals of each face, [110], [101], [011]… etc. The set of directions that are related to these by symmetry are the <110> set.

![close packed structure](images/ccp_cpd.gif)

Packing Efficiency

The packing efficiency of a crystal structure tells us how much of the available space is being occupied by atoms. It is usually represented by a percentage or volume fraction. The packing efficiency is given by the following equation:

$${{\left( {{\rm{number\;of\;atoms\;per\;cell}}} \right) \times \left( {{\rm{volume\;of\;one\;atom}}} \right)} \over {{\rm{volume\;of\;unit\;cell}}}}$$

The steps usually taken are:

* Calculate the volume of the unit cell
* Count how many atoms there are per unit cell
* Calculate the volume of a single atom and multiply by the number of atoms in the unit cell
* Divide this result by the volume of the unit cell

The steps are straightforward, as the numerical sketch below shows.
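A minimal sketch of the recipe above, assuming cubic cells and the hard-sphere close-packing relations (atoms touch along a face diagonal in ccp/fcc, so 4r = a√2, and along the body diagonal in bcc, so 4r = a√3):

```python
import numpy as np

def packing_efficiency(atoms_per_cell, r, a):
    """(atoms per cell * volume of one atom) / volume of a cubic unit cell."""
    return atoms_per_cell * (4 / 3) * np.pi * r**3 / a**3

a = 1.0  # cube edge; the efficiency is independent of the actual length

# fcc (ccp): 4 atoms per cell, touching along a face diagonal: 4r = a*sqrt(2)
print("fcc:", packing_efficiency(4, a * np.sqrt(2) / 4, a))   # ~0.74

# bcc: 2 atoms per cell, touching along the body diagonal: 4r = a*sqrt(3)
print("bcc:", packing_efficiency(2, a * np.sqrt(3) / 4, a))   # ~0.68
```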
The main source of difficulty is expressing the volume of the unit cell in terms of the radii of the atoms (or vice versa). Knowing the close-packed directions makes this step easier for us. The animation below demonstrates how to calculate the packing efficiency of hcp, ccp and bcc structures.

Note: If you know the motif, an easy way to find the number of atoms per unit cell is to multiply the number of atoms in the motif by the number of lattice points in the unit cell.

Symmetry

We have already met the concept of symmetry in relation to crystal structures: the lattice generates the translational symmetry, with the motif repeated on every lattice point. Other types of symmetry exist, including:

* rotation axes
* mirror planes
* centre of symmetry
* inversion axes (combination of rotation and centre of symmetry operations)

An *n*-fold rotational symmetry operation rotates an object by 360°/n. Only n = 1, 2, 3, 4, and 6 are permitted in a periodic lattice.

![Examples of n-fold rotational symmetry](images/nfold_symmetry.gif)

An object has mirror symmetry if reflection of the object in a plane brings it into coincidence with itself:

![Examples of mirror symmetry](images/mirror_plane.png)

Some objects have special symmetry about an origin such that, for any point at position *x*, *y*, *z*, there is an exactly similar point at –*x*, –*y*, –*z*. The origin is called a centre of symmetry (an “inversion centre”). Such an object is said to be centrosymmetric.

An n-fold inversion axis is a combination of a rotation by 360°/n followed by a centre of symmetry operation. An example of a 4-fold inversion axis is shown in the following animation:

Combining symmetry

Only certain combinations of symmetry operation can exist in a crystal structure. This is because one symmetry element operating on another will generate a third symmetry element in the structure, and this can end up generating an infinite number of symmetry elements, as shown in the animation below:

In fact, there are only 32 permitted combinations of mirror planes, rotation axes, centres of symmetry and inversion axes. These are known as the 32 **point groups**. Each point group is a finite set of mutually compatible symmetry elements. When the symmetry elements of a point group are operated on each other, they simply generate one of the other elements within the group.

Crystal systems =

The rotational symmetry of a crystal places constraints on the shape of the conventional unit cell we choose to describe the structure. On this basis we divide all structures into one of 7 crystal systems. For example, for crystals with 4-fold symmetry it will always be possible to choose a unit cell that has a square base with *a* = *b* and γ = 90°:

![4-fold symmetry example](images/4fold_symmetry.gif)

There are 14 unique combinations of the 7 crystal systems with the possible types of primitive and non-primitive lattices. These are referred to as the 14 Bravais lattices.
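The restriction stated earlier, that only 1-, 2-, 3-, 4- and 6-fold rotation axes are compatible with a periodic lattice, can be checked numerically. Expressed in a lattice basis, a symmetry rotation must be an integer matrix, and since the trace of a 3D rotation matrix, 1 + 2 cos θ, does not depend on the basis, the trace must be an integer. A minimal sketch in Python:

```python
import numpy as np

# A rotation through 2*pi/n mapping a lattice onto itself must have an
# integer matrix in lattice coordinates, so its basis-independent trace
# 1 + 2*cos(2*pi/n) must be an integer.
for n in range(1, 13):
    trace = 1 + 2 * np.cos(2 * np.pi / n)
    ok = abs(trace - round(trace)) < 1e-9
    print(f"n = {n:2d}: trace = {trace:+.3f}  {'allowed' if ok else 'forbidden'}")
# Only n = 1, 2, 3, 4 and 6 give integer traces.
```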
Crystal systems, lattices and symmetry elements -

| **Crystal System** | **Defining Symmetry** | **Unit Cell Geometry** |
| - | - | - |
| Triclinic | Translational only | a≠b≠c; α≠β≠γ |
| Monoclinic | A diad axis (parallel to [010]) | a≠b≠c; α=γ=90°; β>90° |
| Orthorhombic | 3 diads (one parallel to each axis) | a≠b≠c; α=β=γ=90° |
| Trigonal | 1 triad (parallel to [001]) | a=b≠c; α=β=90°; γ=120° (or a=b=c; α=β=γ in the rhombohedral setting) |
| Hexagonal | 1 hexad (parallel to [001]) | a=b≠c; α=β=90°; γ=120° |
| Tetragonal | 1 tetrad (parallel to [001]) | a=b≠c; α=β=γ=90° |
| Cubic | 4 triads (parallel to the <111> directions) | a=b=c; α=β=γ=90° |

Bravais Lattice Structures

Summary =

This TLP has covered the following topics:

* The mathematical description of the atomic structure of a crystal.
* The different lattice types.
* What a lattice vector, unit cell and motif are and how they are relevant to describing crystals.
* Close packing and the packing efficiency of a crystal structure.
* What symmetry operators are present within a crystallographic lattice and how only specific combinations of such symmetry elements may exist.
* Bravais lattices and point groups.
* How to construct the plan view of a 2×2 array of unit cells.

Questions =

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. Which of these is **not** a lattice type?

| | | |
| - | - | - |
| | a | H |
| | b | F |
| | c | I |
| | d | C |

2. What is a lattice?

| | | |
| - | - | - |
| | a | A transforming matrix |
| | b | A set of atoms |
| | c | A group of symmetry elements |
| | d | An infinite array of identical points repeated throughout space |

3. How many Bravais lattices (3D) are there?

| | | |
| - | - | - |
| | a | 36 |
| | b | 28 |
| | c | 14 |
| | d | 16 |

4. How many point groups (3D) are there?

| | | |
| - | - | - |
| | a | 24 |
| | b | 32 |
| | c | 12 |
| | d | 19 |

5. What is a unit cell?

| | | |
| - | - | - |
| | a | A unit of volume |
| | b | Any parallelepiped with lattice points at its corners |
| | c | A parallelepiped containing only one lattice point |
| | d | The angle between lattice vectors |

### Deeper questions

*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

6. Construct a plan view of NaCl (sodium chloride). NaCl has a **face-centred cubic** lattice. The motif is: Cl @ (0,0,0); Na @ (0,0,1/2).

Note 1: The motif coordinates are positions relative to each lattice point.
Note 2: In a face centred cubic structure the lattice points are located at: (0,0,0), (1/2,1/2,0), (1/2,0,1/2), (0,1/2,1/2)

7. Diamond has a **face centred cubic** lattice. Its motif is C @ (0,0,0), (1/4,1/4,1/4). Construct a plan view of the diamond unit cell. Treating the carbon atoms as hard spheres, calculate the packing efficiency of diamond.

Note: In a face centred cubic structure the lattice points are located at: (0,0,0), (1/2,1/2,0), (1/2,0,1/2), (0,1/2,1/2)

Going further =

### Books

* A. Putnis, *Introduction to Mineral Sciences*, Cambridge University Press, 1992
* Feynman, Leighton, Sands, *Lectures on Physics Vol. II, Chapter 30*, Addison-Wesley Publishing Company, 1964 [Note: This book follows a different convention when referring to body-centred and face-centred structures]
* C.
Hammond, *The Basics of Crystallography and Diffraction*, 3rd edition, Oxford University Press, 2009
* B.D. Cullity, S.R. Stock, *Elements of X-Ray Diffraction*, Prentice Hall, 2001
* McKie, McKie, *Essentials of Crystallography*, Blackwell Scientific Publications, 1986

Website of the **International Tables for Crystallography**
Aims

On completion of this TLP you should:

* Understand what processes determine the elastic behaviour of a honeycomb structure;
* Know what processes cause the onset of yielding;
* Understand how these ideas for simple structures might be extended to more complex ones, such as foams.

Before you start

There are no special prerequisites for this TLP.

Introduction

There is an important class of materials, many biological, that are highly porous and made by bonding rods, ribbons or fibres in both regular and irregular structures. These include paper, bone, wood, packaging foam and insulating fibre mats, and they can be made of polymers, metals, ceramics and natural materials. Despite the different structures and materials, there are many similarities in how they behave. An important class of these materials is where the rods or ribbons form cells, so-called cellular structures. Here we explain how such structures deform in compression. To understand the deformation processes more easily, the behaviour of a regular honeycomb structure is described before extending the ideas to structures such as foams.

Compression of a honeycomb: Experimental

The honeycomb studied here is an array of regular hexagonal cells, with the cell walls made of thin strips of aluminium. The structure is not quite as simple as it first appears because of the way in which it is made. This causes one in every three cell walls to consist of two layers of aluminium bonded with adhesive, making some walls stiffer than others. In the honeycomb used here the thickness of an individual sheet, *t*, was 0.09 mm, and the length of each of the cell faces, *l*, was 6.30 mm. The relative density, the measured density as a fraction of the density of the solid material, is 0.008.

There are two different directions in which a hexagonal honeycomb can be compressed in the plane of the honeycomb.

| | |
| - | - |
| Diagram of cell walls parallel to the loading axis | Diagram of cell walls diagonal to the loading axis |
| Cell walls lie parallel to the loading axis | Cell walls lie diagonal to the loading axis |

Although quantitatively different, the basic deformation processes are similar in both cases, so only the situation where some cell walls lie parallel to the loading axis is described here.

Squares of honeycomb with 6 cells along each side were cut from a sheet of material. The samples were then compressed between flat, parallel platens at a constant displacement rate of 1 mm min⁻¹, giving the stress-strain curve below. The stress-strain curve has 3 distinct regions:

* an initial *elastic region*;
* followed by the onset of irreversible deformation, leading to a region where the stress does not change with increasing strain, known as the *plateau region*;
* and lastly a region where the stress again begins to rise rapidly with increasing strain, known as *densification*.

#### Elastic region

The elastic region is characterised by the effective Young modulus of the material, that is, the Young modulus of a uniform material that for the same imposed stresses gives rise to the same strains. This is found by taking the slope of the unloading curve, which helps to reduce the effects of any local plastic flow. The measured Young modulus was 435 kPa.

#### Plateau region

With continued loading the stresses in the faces increase. Eventually these reach the flow stress of the material and irreversible yielding begins. The stress at which this occurred was approximately 15 kPa. The structure then began to collapse at an approximately constant stress, \({\sigma _{\rm{P}}}\).
#### Densification

This continued up to a strain of ~0.7, at which point the stress started to rise much more rapidly as the faces from opposite sides of the cells were pressed up against one another.

This three-stage deformation behaviour is typical of virtually all highly porous materials, even ones made of very brittle materials. The next step is to try and quantitatively describe the observed behaviour.

Elastic behaviour (I) =

To start, let us assume that the predominant contribution to the elastic strain comes from the axial compression of the vertical struts, as shown below. We can estimate the magnitude of this strain: the cross-sectional area of solid material, \({A_{\rm{V}}}\), in a cut across just the vertical faces is less than that if the material were completely solid by the ratio of the cell wall thickness, t, to the horizontal distance across each cell, 2 l cos θ. As t << l cos θ, this is given by

\[{A_{\rm{V}}} = \frac{t}{{2l\cos \theta }}\]

Using the measured values of t (= 0.09 mm) and l (= 6.30 mm), and taking θ = 30° as the cells are hexagonal, gives \({A_{\rm{V}}}\) as 0.008. Taking the Young modulus of aluminium as 70 GPa, this predicts the Young modulus of the honeycomb to be 560 MPa. This is greater than the observed value of 435 kPa by more than 3 orders of magnitude and shows that axial compression of the vertical faces makes a negligible contribution to the elastic strain.

Elastic behaviour (II)

If the axially loaded faces do not deform, then clearly it must be those at an angle to the loading axis that do, the diagonal faces. And because they are at an angle to the loading axis they will bend. The bending of each face must be symmetrical about the mid-point of the face and can be estimated using beam bending theory. To do this, each face is described as two beams, cantilevered at the vertices of the hexagonal cell and loaded at the centre point. Note that one beam (i.e. a half cell wall) is pushed upward, the other downward. This gives the Young modulus of the honeycomb as

\[E = \frac{4}{{\sqrt 3 }}{E_{\rm{S}}}{\left( {\frac{t}{l}} \right)^3}\]

Using the measured values of t (= 0.09 mm) and l (= 6.30 mm) and taking \({E_{\rm{S}}}\) as 70 GPa, the elastic modulus is predicted to be 471 kPa. This is within 10% of the measured value for the sample. The predominant contribution to the Young modulus is therefore from the bending of the diagonal faces.

Yielding and plateau behaviour

The aluminium honeycomb will start to plastically deform if the stress in the faces anywhere exceeds the flow stress, *σ*Y, of the aluminium cell wall. We have already shown that the predominant contribution to the elastic strain is the bending of the diagonal faces. Furthermore, we can estimate this if each face is considered to be made up of two beams, each of length *l*/2, cantilevered at the end connected to the vertical cell wall and acted upon by a force of magnitude *F* cos *θ*, where *F* is the force applied at the ends of the sample and *θ* is the angle between the diagonal face and the horizontal. It is clear then that the stress will be a maximum where the moment is greatest, that is at the vertices of the hexagonal cells.
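Before continuing to the yield estimate, the two elastic-modulus predictions above can be reproduced in a few lines. The sketch below (Python; a minimal check using the measured values quoted in this TLP) evaluates both the axial-compression and face-bending estimates.

```python
import numpy as np

t, l = 0.09e-3, 6.30e-3      # cell wall thickness and face length (m)
E_s = 70e9                   # Young modulus of aluminium (Pa)
theta = np.radians(30)       # face angle for regular hexagonal cells

# Estimate 1: axial compression of the vertical faces only.
A_v = t / (2 * l * np.cos(theta))            # area fraction of vertical walls
print(f"axial estimate:   {A_v * E_s / 1e6:.0f} MPa")
# ~580 MPa here; the TLP quotes 560 MPa using the rounded fraction 0.008.
# Either way it is ~1000x stiffer than the measured 435 kPa.

# Estimate 2: bending of the diagonal faces.
E_bend = (4 / np.sqrt(3)) * E_s * (t / l) ** 3
print(f"bending estimate: {E_bend / 1e3:.0f} kPa")   # ~471 kPa, close to 435 kPa
```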
It can be shown that the applied stress, σ, when the maximum stress in each face reaches the flow stress, *σ*Y, of the material making up the cell walls is given by

\[\sigma = \frac{4}{9}{\left( {\frac{t}{l}} \right)^2}{\sigma _{\rm{Y}}}\]

Using the measured values of *t* (= 0.09 mm), *l* (= 6.30 mm) and *σ*Y (= 100 MPa) predicts the yield strength of the honeycomb, *σ*, to be 9 kPa, somewhat lower than the measured value of 15 kPa. The stress we have estimated is the stress at which plastic flow will start in the outer surfaces at the cantilevered point. To enable plastic flow to spread through the thickness of the cell face requires that the stress is increased further by a factor of 1.5, giving a macroscopic flow stress of 13.5 kPa, much closer to the measured value.

Once the material has started to yield, the cell walls begin to collapse. This occurs at an approximately constant stress until the cell walls impinge on one another, when the stress begins to rise more rapidly with increasing strain.

Densification =

As the honeycomb yields in the plateau region, the regular hexagonal cell with a height (*l* + 2*l* sin *θ*) changes shape, with the protruding apices being pressed toward one another to give cells with the shape of a bow-tie and a height *l*.

![Image of a bow tie](images/bow_tie.jpg)

If the cells deform uniformly then the strain at which this occurs, *ε*D, is given by

\[{\varepsilon _{\rm{D}}} = \ln \left( {\frac{l}{{l + 2l\sin \theta }}} \right) = \ln \left( {\frac{1}{{1 + 2\sin \theta }}} \right)\]

Note that true strain is used because the strains are large and compressive. As *θ* = 30°, *ε*D is predicted to have a magnitude of 0.7. Further increases in strain cause opposing cell walls to be pressed against one another and the stress required for further deformation increases rapidly. As can be seen in the stress-strain curve below, this prediction gives good agreement with the observed stress-strain curve.

It can be seen that this collapse does not occur uniformly throughout the whole structure, but layer by layer of cells. This behaviour is rather dependent on the size of the cell compared to that of the sample. Increasing the number of cells in a cross-section causes the behaviour to become more uniform, as might be expected.

It is now possible to quantitatively understand the entire stress-strain behaviour of a simple honeycomb. The next step is to extend these ideas to less regular structures, such as foams and fibrous structures.

Other porous structures =

Many other porous structures show the same type of stress-strain behaviour. The basic reasons are similar. The initial behaviour is elastic, until the stresses in the struts reach their flow or fracture stress. There is then a plateau region as the cells collapse, until the struts from opposite sides of the cells impinge on one another and the applied stress increases more rapidly. However, the details can be very different. For instance the struts in ceramic foams tend to break, but a plateau region is still seen. Many combinations of material and cell structure are possible.

Seeing how the cells deform in a foam is more difficult than in the simple honeycomb. However, this has been done using X-ray tomography, as shown in the short video below.

Deformation of cells in foam

(For further details see J.A. Elliott et al, “*In-situ* deformation of an open-cell flexible polyurethane foam characterised by 3D computed microtomography”, *J. Mater.
Sci.* **37** (2002) 1547-1555.)

Looking at the large cell on the right-hand side, it is clear that the deformation of the foam is similar to the honeycomb and the strain comes predominantly from the bending of the struts transverse to the loading axis. For simplicity, consider the open-cell foam as having a cubic unit cell as shown below.

| | |
| - | - |
| Diagram of cubic foam structure | Open cell bending |

Note that each transverse strut has a vertical strut half-way along it, so that axial loading causes the struts transverse to the loading axis to bend, as shown in the diagram above. The Young modulus can now be estimated in a similar way to that for the honeycomb, except that the struts are assumed to have a square cross-section, rather than being rectangular as before, and θ, the angle between the transverse strut and the horizontal, is 0. This gives an expression for the relative Young modulus, E/ES, as

\[\frac{E}{{{E_{\rm{S}}}}} = k{\left( {\frac{\rho }{{{\rho _{\rm{S}}}}}} \right)^2}\]

where E is the Young modulus of the porous material and ES that of the solid material, and k is a numerical constant, experimentally found to be approximately equal to 1.

![Graph of data from Proc Roy Soc 1982 figure 9](images/Figure9.gif)

As can be seen in the graph above, experiments show this is correct for isotropic, open-cell foams and even appears to be obeyed where the struts are not slender beams and also, at least approximately, where the cells are closed rather than open. This is thought to arise because in most closed-cell foams most of the material is still along the edges of the cells, rather than being uniformly distributed across the faces. (The data is taken from various sources cited in L.J. Gibson and M.F. Ashby, "On the mechanics of three-dimensional cellular materials", *Proc. Roy. Soc. A*, **382** [1782] (1982) 43-59.)

Porous structures in bending

Porous structures are often used as a lightweight core separating two strong, stiff outer layers to form a sandwich panel. Like the I-beam, such structures have a greater resistance to bending per unit weight of material than a solid beam and so are useful where weight-saving and stiffness are important. Typical applications include flooring panels in aircraft or rotor blades in helicopters. Sandwich structures are also common in biological structures, such as leaves or spongy bone.

| | |
| - | - |
| Aluminium foam sandwich panel | Dried bone specimen |
| From J. Banhart, Manufacture, Characterisation and Application of Cellular Metals and Metal Foams, *Progress Mater. Sci.*, 2001, 46, pp.559-632. | From *Cell Biology* by Thomas D. Pollard and William C. Earnshaw, Saunders 2004, pp.540 (Figure 34-4), courtesy of D.W. Fawcett, Harvard Medical School. |

However, some porous solids, such as wood, are used without the stiff, strong outer layers. We might ask whether it is generally true, or under what conditions, that a porous rod will be stiffer in bending (i.e. give a smaller deflection for a given applied force) than a solid rod of the same length and overall mass. Consider two rods, one porous and the other solid. As each rod has the same length and mass and a circular cross-section, the porous one must have a larger radius.
The deflection, δ, of a cantilevered beam of length L under an imposed force W is given by

\[\delta = \frac{1}{3}\frac{{W{L^3}}}{{EI}}\]

For given values of W and L, an increase in beam stiffness requires a higher value of the product EI. For a beam of circular cross-section, \(I = \pi {r^4}/4\). The porous beam has a larger radius and therefore a larger second moment of area than the solid beam. However, the porous beam also has a lower Young modulus. For the porous beam to be stiffer in bending, the rate at which the second moment of area increases with radius must therefore be greater than the rate at which the Young modulus decreases. If the density of the porous beam is ρ and that of the solid beam is ρS, and both have the same length and mass, then the ratio of the radius of the porous beam, r, to that of the solid beam, rS, is given by

\[\left( {\frac{{{\rho _{\rm{S}}}}}{\rho }} \right) = {\left( {\frac{r}{{{r_{\rm{S}}}}}} \right)^2}\]

As \(I \propto {r^4}\), the ratio of the second moments of area of the porous and solid beams, *I* and *I*S respectively, is

\[\frac{I}{{{I_{\rm{S}}}}} = {\left( {\frac{{{\rho _{\rm{S}}}}}{\rho }} \right)^2}\]

In other words, *I*/*I*S increases as the inverse square of the relative density, *ρ*/*ρ*S. Now the expression derived above for the elastic modulus of an open-cell porous body was

\[\frac{E}{{{E_{\rm{S}}}}} = k{\left( {\frac{\rho }{{{\rho _{\rm{S}}}}}} \right)^2}\]

That is, E/ES decreases as the square of the relative density. In other words, although *I* is increasing with decreasing density, E is decreasing at the same rate. In this case there would be no advantage in using such a material in bending compared with the solid material. As nothing can be done about the change in radius, and hence I, with relative density, a higher bending stiffness can only be obtained by ensuring that E/ES varies with ρ/ρS raised to a power less than 2. This is the case for the axial direction of wood, where the exponent lies closer to 1 than to 2, as shown below.

![Graph of data from Proc Roy Soc 1982 figure 10](images/Figure10.gif)

(The data is taken from K.E. Easterling et al, “On the mechanics of balsa and other woods”, *Proc. Roy. Soc. A*, **383** [1784] (1982) 31-41.)

Such changes can be brought about by varying the cell structure, for instance by elongating the cells, as occurs in wood. However, in the **transverse** directions E/ES for wood decreases much more rapidly with decreasing *ρ*/*ρ*S. Here the exponent lies between 2 and 3.

Summary =

In this TLP, the elastic, yielding and densification behaviour of a simple honeycomb structure has been studied experimentally. It is shown that the deformation of a honeycomb structure is made up of 3 main regions: an elastic region, which ends when the maximum stress in the cell faces becomes equal to the flow stress of the material, allowing the cells to compact at a constant stress, followed by a region in which the load rises rapidly with increasing strain, as the honeycomb is compacted. Quantitative descriptions of the behaviour have been derived and compared with the experimental measurements. These show that the deformation behaviour of a honeycomb is determined not by the axial compression of those faces parallel to the loading axis, but by the bending of faces lying at some angle to the loading axis. It has been shown that these ideas can be extended to describe the deformation behaviour of more irregular structures, such as foams.
For foams that are isotropic and have open cells, it is predicted that the relative elastic modulus varies with the square of the relative density, consistent with observations in the literature. The uses of such structures are described and it is shown that, in isotropic open-cell foams, sandwich structures are required to obtain improved specific stiffness in bending. The enhanced stiffness of cellular structures such as wood arises from modifications to the structure that give a different dependence of the relative stiffness on the relative density.

Questions =

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. How does elastic deformation of a honeycomb with hexagonal cells with some faces aligned parallel to the loading axis take place?

| | | |
| - | - | - |
| | a | By elastic compression of the vertical faces |
| | b | By elastic buckling of the vertical faces |
| | c | By elastic bending of the diagonal faces |

### Deeper questions

*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

2. How might elongating the cells of a hexagonal honeycomb in the direction of loading change *E*/*E*S for a given relative density?

| | | |
| - | - | - |
| | a | Decrease it |
| | b | Have no effect |
| | c | Increase it |

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

3. What is the criterion for the onset of yielding in a honeycomb?

| | | |
| - | - | - |
| | a | That somewhere the stress should exceed the flow stress through the thickness of the film |
| | b | That the maximum stress in the face exceeds the material flow stress |
| | c | That the stress at the centre of the face should exceed the material flow stress |

### Deeper questions

*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

4. For a highly porous structure *E*/*E*S is proportional to \((\rho /{\rho _{\rm{S}}})^n\). In which case will the structure show an increased bending stiffness? Hence explain why many porous materials are often used as the core in sandwich structures.

| | | |
| - | - | - |
| | a | n > 2 |
| | b | n = 2 |
| | c | n < 2 |

5. Consider a honeycomb loaded with some faces parallel to the compressive loading direction, in which the shape of the hexagonal cell is such that θ < 0. How would the material deform elastically in the transverse direction?

| | | |
| - | - | - |
| | a | Contract inwards |
| | b | No transverse movement |
| | c | Expand outwards |

Going further =

### Books

* L.J. Gibson and M.F. Ashby, *Cellular solids: structure and properties*, Cambridge University Press, 2nd edition (1997). Covers honeycombs and foams, both open- and closed-celled, as well as the effects of gases and liquids in the cells. It also discusses the properties of bone, wood and the iris leaf as highly porous solids.
* K.K. Chawla, *Fibrous materials*, Cambridge University Press, 2nd edition (1998). Covers fibrous and some woven structures.
* D. Boal, *Mechanics of the Cell*, Cambridge University Press, 2002. See chapter 3 on two-dimensional networks.

### Websites

* Website on amusing elastic properties.
Aims

By the end of this TLP, you should be able to:

* Explain how Evaporation can be used as a deposition technique, and know what external factors have an effect.
* Know how temperature affects the necessary vapour pressure of the material for evaporation to occur.
* Describe the basic Sputtering technique, and the difficulties which it presents.
* Discuss the complications which arise when the film is to be made of an alloy or compound.
* Explain the process of laser ablation and how it can improve on other Physical Vapour Deposition (PVD) techniques.
* Know how energy contributes to the structure and properties of the final film.
* Describe the 3 basic growth modes and what conditions favour each.

Before you start

There are no specific requirements.

Introduction

In this TLP, thin films will be defined as solid films formed from a vapour source:

*Built up as a thin layer on a solid support (substrate) by controlled condensation of individual atomic, molecular, or ionic species.*

This can lead to structures which are far from equilibrium, so understanding how different deposition techniques can lead to different growth mechanisms and structures is essential in being able to control the properties of the film. The applications of these films can vary depending on the thickness, among other properties:

* **Low thickness**
  + Optical interference effects
  + Electron tunnelling
  + High resistivity
* **High surface to volume ratio**
  + Gas absorption
  + Diffusion
  + Catalytic activity
* **Microstructural control**
  + High hardness
  + Optical absorption
  + Corrosion protection

Because of their many controllable aspects, thin films play a major role in microelectronics, communications, protective coatings, optics and the medical industry. There is always continued pressure for advances in size reduction, uniformity, purity, reproducibility and manufacturing speed. One of the main methods for forming these films is through Physical Vapour Deposition (PVD). In PVD a vapour is generated from a source and travels to a substrate, where there is nucleation and growth of the solid film material. The two main ways of forming this vapour flux are through Evaporation and Sputtering. A common, general set up for this process is shown schematically below:

![](images/PVD.svg)

Simple deposition will lead to amorphous film material unless the depositing atoms have enough energy to rearrange themselves into a more thermodynamically stable crystalline structure. If the film growth is epitaxial then it is forming on a lattice-matched crystalline substrate and so will be crystalline from the start. This is a multidisciplinary subject which includes vacuum engineering, fluid dynamics, plasma physics and molecular simulations.

Evaporation theory

One method of generating the vapour flux is through evaporation (heating a solid or liquid source). For evaporation to occur, the heating must lead to sufficient vapour pressure (typically between 0.1-1 Pa). This often requires melting of the source, but not necessarily. The Clausius-Clapeyron equation for solid-vapour and liquid-vapour equilibrium is an approximation (based on the vapour being a perfect gas) that is often used as a starting point to describe the connection between temperature and pressure.
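To see how sharply vapour pressure depends on temperature, the sketch below evaluates the integrated Clausius-Clapeyron form shown in the equations that follow. The enthalpy and reference point are illustrative round numbers, not data for any particular material.

```python
import numpy as np

R = 8.314        # gas constant, J mol^-1 K^-1
dH = 300e3       # illustrative evaporation enthalpy, J mol^-1 (assumed value)
T0, P0 = 1500.0, 1.0   # assumed reference point: 1 Pa at 1500 K

def vapour_pressure(T):
    """Integrated Clausius-Clapeyron: ln(P/P0) = -(dH/R) * (1/T - 1/T0)."""
    return P0 * np.exp(-(dH / R) * (1.0 / T - 1.0 / T0))

for T in (1400, 1450, 1500, 1550, 1600):
    print(f"T = {T} K  ->  P = {vapour_pressure(T):8.3f} Pa")
# Each 50 K step changes P by a factor of ~2, which is why the source
# temperature must be monitored so precisely to control the growth rate.
```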
\[\frac{{{\rm{d}}P}}{{{\rm{d}}T}} = \frac{{\Delta H\left( T \right)}}{{T\Delta V}}\]

\[\ln P \cong - \frac{{\Delta {H_e}}}{{RT}} + c\]

The plot below is based on this, and gives the vapour pressure of the material at a given temperature. Click on the line that you want to analyse and use the scroll bar to see how far you can reduce the temperature whilst keeping an effective vapour pressure. As you can see, a small variation in temperature leads to a large change in vapour pressure. Vapour flux (atom arrival rate per unit area per unit time) is linked to the vapour pressure, so we need very precise temperature monitoring in order to control the vapour flux and hence the film growth rate. This process is carried out in a vacuum, and the evaporated atoms have relatively low energies (~0.1-0.3 eV).

Evaporation Techniques

Simple Resistance Evaporation -

The source (known as the charge) is held in an electrically conductive boat or crucible, supported in a coil, or wrapped around a rod. This support is then heated by passing a current through it.

![](images/Evaporation.svg)

This method is reliable and relatively cheap due to the lack of complex components. However, the heating of the support can lead to desorption / evolution of impurities which will be incorporated into the growing film. There is also limited control over the temperature of the charge, and hence the deposition rate, so this technique is most widely used for non-critical applications.

Electron Beam Evaporation -

In this scenario, the charge is heated directly using an electron beam.

![Electron beam evaporation](images/Electron_Beam.svg)

This can lead to higher purity films since the crucible / support is not heated and may be water-cooled. There may also be some ionisation or activation of the depositing vapour flux as it passes through the electron beam.

Sputtering

Sputtering is an etching process. The source (known as the target) is bombarded with a high energy species, leading to the ejection of a vapour flux. Sputter deposition therefore uses this flux as the vapour source for film growth. It principally consists of atoms, with a range of energies, travelling away from the target at random angles. Sputtering is a purely physical process and is most simply modelled by assuming elastic binary collisions. The Sputter Yield (S) is defined as the average number of sputtered atoms per incident particle, and it is affected by many variables. Forming an exact relationship between these parameters and the sputter yield is very difficult. Some of the contributing factors are:

* The momentum and energy transfer coefficients
* The temperature of the target
* The bond strength of the target atoms
* The incident particle energy
* The incident angle at the target surface

Typically, the target is bombarded with noble gas ions such as argon (with energies in the range of 100-500 eV). In this case, it is found that S ~ 1 for most metals.

DC Glow Discharge Sputtering

The most common method of sputter deposition uses a self-sustaining discharge in a low pressure inert gas. The sputtering target is the cathode and ejects secondary electrons. These collide with the inert gas atoms, which become positively charged and accelerate towards the target. This causes sputtering on impact.

![DC glow discharge sputtering](images/DC_Glow_3.svg)

In order to maintain the discharge, the gas pressure needs to be high enough that the secondary electrons collide with and ionise gas atoms before they are lost to the surroundings.
The sputtered flux then has to travel through this gas in order to reach the substrate. This scatters it, meaning that the setup is quite inefficient. The following animation shows the different stages of the sputtering process:

If the required target is not electrically conductive, then a Radio Frequency voltage can be used to develop a negative potential on an insulating target surface.

Magnetron Sputtering

The principle used here is to add a magnetic field at the target surface. This means that the secondary electrons which are ejected from the target are in a region of crossed electric and magnetic fields, leading to cycloidal motion. This traps the electrons near the target surface, prolonging their residence time and enhancing the probability of collisions, such that a denser discharge can be maintained down to lower pressures. This greatly increases the deposition rates.

| | |
| - | - |
| Magnetron sputtering | |

Due to the nature of the magnetic field, the electrons are trapped within a specific region on the target, and so this is where sputtering occurs most heavily. This leads to a distinctive ‘racetrack' region where the target is worn down much faster. There are many possible magnetron geometries, but the rectangular planar one shown above is very common. Cylindrical cathodes allow uniform erosion of the target surface, improving material utilisation.

Comparisons and Complications =

Energy

As seen above, the evaporated atoms will have relatively low energies, which correspond mainly to their thermal energies. This is typically between 0.1-0.3 eV. However, sputtered atoms can have a much larger range of energies, from 5-50 eV. This large range can have an effect on the uniformity of the film. The effect of the energy of the incoming (source) species will be discussed in the next section.

Alloys and Compounds

If an alloy source is used, then the different components will have different vapour pressures and so the vapour fluxes will not be equal. One way of avoiding this is to use co-evaporation from multiple charges, but this rapidly becomes difficult to control and maintain accurately. Compounds may be evaporated directly, but the high temperatures encourage dissociation. Sputter deposition is much more flexible, as using multiple targets, alloy or compound targets is more feasible.

Another effective method for compound film growth is the use of Laser Ablation (also known as Pulsed Laser Deposition). In this scenario, the vapour flux is created by firing high energy, focused laser pulses onto the surface of the target. This produces a plume of ablated target material. The laser typically has a wavelength of 200-300 nm, and the pulse lasts between 6-12 ns. The most efficient plume production occurs when the laser strikes the target surface at 45 degrees. The speed of the heating process means that all the components evaporate simultaneously – avoiding fractionation.

![PLD image](images/PLD.svg)

Laser Ablation simply requires a vacuum chamber, a support for the target, and a window for the laser. Unfortunately, the laser can be quite expensive and difficult to scale up to industrial applications.

Growth Modes

When using evaporation, the flux travels in a straight line and so the deposition is line-of-sight only. This can lead to macroscopically shadowed regions, which are reduced by rotating the substrate during the deposition process.
![Shadowing image](images/Shadowing_2.svg)

When sputtering, the deposition is not line of sight (due to scattering in the intervening gas) but shadowing still occurs on an atomic scale. Variables which affect the degree and orientation of the shadowing are, among others, the background pressure in the chamber and the average angle from which the sputtered atoms approach. If the atoms have low energies then they will remain in their initial positions. If the atoms have higher energies, then their thermal mobility may be high enough for surface diffusion. This allows the atoms to rearrange themselves into a lower energy conformation. A typical energy threshold to mobilise atoms on the surface is around 5 eV. Hence, sputtered atoms typically have enough energy for surface diffusion to take place, whilst evaporated ones don't. If the energy is high enough for surface diffusion to take place, then once the stable nuclei form on the substrate surface, they may coalesce. This leads to 3 possible modes of film growth.

Summary =

Thin films can be constructed by the deposition of vaporised atoms onto a substrate surface. There are two principal methods of generating the flux for Physical Vapour Deposition (PVD) – Evaporation and Sputtering. Both of these involve the use of a vacuum chamber.

| | |
| - | - |
| Evaporation * Thermal Process * High Vacuum * Low energy vapour flux * Line of sight deposition | Sputtering * Physical Process * Background of Inert Gas * High energy vapour flux * Scattering of atoms |

If the incoming vapour flux has low energy, then the resultant film will have shadowed regions. However, if there is enough energy for surface diffusion to occur, then the film will rearrange into one of 3 possible structures – depending on the relative bonding energies between the film and substrate atoms.

Deposition of films made from alloys or compounds is more difficult, as the different elements will not necessarily react to the environment in the same way. One way around this issue is a process known as Pulsed Laser Deposition (PLD). This involves heating the source material instantaneously using a focussed laser in order to avoid fractionation.

Questions =

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. Why are substrates often rotated during the deposition process?

| | | |
| - | - | - |
| | a | To increase the speed of deposition. |
| | b | To allow compounds to be deposited. |
| | c | To help give a uniform thickness. |
| | d | To help cool the substrate surface. |

2. What is an advantage of using electron beam evaporation over resistive heating evaporation?

| | | |
| - | - | - |
| | a | Lower contamination levels from the heating vessel. |
| | b | Lower cost. |
| | c | Increased speed of deposition. |
| | d | Compounds can be evaporated. |

3. Why is argon often used as the gas in the sputtering set up?

| | | |
| - | - | - |
| | a | It is reactive. |
| | b | It is inert. |
| | c | It is heavy. |
| | d | It is easily ionised. |

4. What effect does changing the angle of deposition have on the film?

| | | |
| - | - | - |
| | a | Makes the coating less uniform. |
| | b | Makes the coating more uniform. |
| | c | ‘Tilts' the shadowed regions and columns to a different angle. |
| | d | No effect. |

5. What conditions are most likely to lead to Layer Growth?

| | | |
| - | - | - |
| | a | Film atoms are more strongly bound to each other than to the substrate. |
| | b | Film atoms are more strongly bound to the substrate than to each other. |
| | c | Low atomic energy (<5 eV). |
| | d | High misfit strain between the substrate and the film. |

### Open-ended questions

*The following questions are not provided with answers, but intended to provide food for thought and points for further discussion with other students and teachers.*

6. Explain how Pulsed Laser Deposition allows coating of precise compositions of alloys and compounds.

Going further =

### Books

M. Ohring, *The Materials Science of Thin Films*, Academic Press, 1992, ISBN: 0-12-524990-X
Aims

On completion of this TLP you should:

* Understand the meaning of the terms dielectric constant, dielectric loss and dielectric breakdown.
* Recognise that the properties of dielectrics are due to polarisation and understand how this polarisation arises on the microscopic scale.
* Understand how material structure, temperature and frequency affect the properties of dielectrics.
* Be aware of some practical applications of dielectric materials.

Before you start

There are no specific prerequisites for this TLP.

Introduction

A dielectric material is any material that supports charge without conducting it to a significant degree. In principle all insulators are dielectric, although the capacity to support charge varies greatly between different insulators, for reasons that will be examined in this TLP. Dielectric materials are used in many applications, from simple electrical insulation to sensors and circuit components.

Electric dipoles

A dielectric supports charge by acquiring a polarisation in an electric field, whereby one surface develops a net positive charge while the opposite surface develops a net negative charge. This is made possible by the presence of electric dipoles – two opposite charges separated by a certain distance – on a microscopic scale. A mathematical treatment of dipole moment can be found elsewhere. For the purposes of this TLP it is worth noting that a dipole can be considered in two ways:

1. If two discrete charged particles of opposite charges are separated by a certain distance, a dipole moment **μ** arises.

![Equation for dipole moment](images/dipole.gif)

2. If the centre of positive charge within a given region and the centre of negative charge within the same region are not in the same position, a dipole moment **μ** arises. For example, in the diagram below the centre of positive charge from the 8 cations shown is at X, while the centre of negative charge is located some distance away on the anion.

![Equation for dipole](images/dipole2.gif)

The second view of dipole moment is more useful, since it can be applied over a large area containing many charges in order to find the net dipole moment of the material, and can also be used in situations where it is inappropriate to consider the charges as belonging to discrete particles – e.g. in the case of the electron cloud that surrounds the nucleus in an atom, which must be described by a wavefunction. Note that in the equation for dipole moment, **r** is a vector (the sign convention is that **r** points from negative to positive charge), therefore the dipole moment **μ** is also a vector.

The polarisation of a material is simply the total dipole moment for a unit volume:

\[P = \frac{{\sum \mu }}{V}\]

where V is the overall volume of the sample. Since Σ**μ** is a vector sum, a material may contain dipoles without having any net polarisation, since dipole moments can cancel out.

Polarisation mechanisms =

There are three main polarisation mechanisms that can occur within a dielectric material: electronic polarisation, ionic polarisation (sometimes referred to as atomic polarisation) and orientational polarisation. The animation below illustrates how each of these mechanisms functions on the microscopic scale. All non-conducting materials are capable of electronic polarisation, which is why all insulators are dielectric to some degree. In contrast, the ionic and orientational modes are only available to materials possessing ions and permanent dipoles respectively.
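The definitions above translate directly into a small calculation. The sketch below (Python with NumPy; the charges, separation and cell volume are toy values, not data for any real crystal) evaluates the net dipole moment of a pair of point charges and the corresponding polarisation P = Σμ/V.

```python
import numpy as np

# Toy "unit cell": point charges (coulombs) at positions (metres).
charges = np.array([+1.6e-19, -1.6e-19])          # one cation, one anion
positions = np.array([[0.0, 0.0, 0.0],            # cation at the origin
                      [0.0, 0.0, 2.0e-10]])       # anion displaced by 2 angstroms

# Net dipole moment mu = sum(q_i * r_i); for a simple pair this is equivalent
# to q*r with r pointing from the negative to the positive charge.
mu = (charges[:, None] * positions).sum(axis=0)

# Polarisation = net dipole moment per unit volume (assume a (4 A)^3 cell).
V = (4.0e-10) ** 3
P = mu / V
print("dipole moment (C m):", mu)    # magnitude ~3.2e-29 C m, i.e. ~10 debye
print("polarisation (C m^-2):", P)
```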
Another contribution to polarisation is space charge, or the accumulation of mobile charges at structural surfaces and interfaces. Rather than being a direct property of a material this is only a feature of heterostructures, and hence is not discussed further here. Capacitors A capacitor is a device used for storing charge. It normally consists of two conducting plates with a dielectric material between them, although an “empty capacitor” – one with a vacuum between the plates – may also be used in some applications. Each capacitor has a capacitance C, the standard units of which are Farads (F). The capacitance is defined by the relationship Q = C V where Q is the charge on each capacitor plate and V is the voltage between capacitor plates. Therefore 1 F = 1 C V⁻¹. ![Diagram of a capacitor](images/capacitor.jpg) The capacitance is affected by various factors, such as the capacitor geometry; however, here we shall only deal with the effect of the dielectric material chosen to occupy the space between the plates. Increasing the capacitance in this way is desirable, since it allows a greater electric charge to be stored for a given field strength. The dielectric constant = The dielectric constant of a material provides a measure of its effect on a capacitor. It is the ratio of the capacitance of a capacitor containing the dielectric to that of an identical but empty capacitor. An alternative definition of the dielectric constant relates to the permittivity of the material. Permittivity is a quantity that describes the effect of a material on an electric field: the higher the permittivity, the more the material tends to reduce any field set up in it. Since the dielectric material reduces the field by becoming polarised, an entirely equivalent definition is that the permittivity expresses the ability of a material to polarise in response to an applied field. The dielectric constant (sometimes called the ‘relative permittivity’) is the ratio of the permittivity of the dielectric to the permittivity of a vacuum, so the greater the polarisation developed by a material in an applied field of given strength, the greater the dielectric constant will be. There is no standard symbol for the dielectric constant – you may see it referred to as *κ*, *ε*, *ε*′ or *ε*r. In this TLP *κ* shall be used to avoid confusion with the absolute permittivity, which may also be given the symbol *ε*. The two definitions of the dielectric constant are illustrated by the diagram below (the green arrows represent the electric field). ![Two diagrams illustrating definitions of dielectric constant](images/dielectric-constant.jpg) In general, the more available polarisation mechanisms a material possesses, the larger its net polarisation in a given field will be and hence the larger its dielectric constant will be. The dielectric constant of a material and its refractive index are closely linked by the equation κ = *n*². However, care must be taken in applying this equation. It is only strictly accurate when the dielectric constant and the refractive index are measured under the same conditions. Specifically, since the dielectric constant can vary significantly with frequency (for reasons discussed in the next section of this TLP), we must measure the dielectric constant under alternating current at the same frequency at which we measure the refractive index – the frequency of visible light, ~10¹⁵ Hz.
However, quoted values of the dielectric constant normally refer to the static dielectric constant – that is, the dielectric constant under direct current. This is often very different from the value of the dielectric constant at 10¹⁵ Hz. The exception to this is for materials that possess only the electronic mode of polarisation. For these materials, the dielectric constant does not vary significantly with frequency below visible frequencies, and κS ≈ *n*², where κS is the static dielectric constant. To summarise: the equation κ = *n*² can be applied to the static dielectric constants of non-polar materials only, or to the high-frequency dielectric constants of any dielectric. Variation of the dielectric constant in alternating fields We know that a dielectric becomes polarised in an electric field. Now imagine switching the direction of the field. The direction of the polarisation will also switch in order to align with the new field. This cannot occur instantaneously: some time is needed for the movement of charges or rotation of dipoles. If the field is switched, there is a characteristic time that the orientational polarisation (or average dipole orientation) takes to adjust, called the relaxation time. Typical relaxation times are ~10⁻¹¹ s. Therefore, if the electric field switches direction at a frequency higher than ~10¹¹ Hz, the dipole orientation cannot ‘keep up’ with the alternating field: the polarisation direction is unable to remain aligned with the field, and this polarisation mechanism ceases to contribute to the polarisation of the dielectric. In an alternating electric field both the ionic and the electronic polarisation mechanisms can be thought of as driven damped harmonic oscillators (like a mass on a spring), and the frequency dependence is governed by resonance phenomena. This leads to peaks in a plot of dielectric constant versus frequency, at the resonance frequencies of the ionic and electronic polarisation modes. A dip appears at frequencies just above each resonance peak; this is a general feature of all damped resonance responses, corresponding to the response of the system being out of phase with the driving force (we shall not go into the mathematical proof of this here). In this case, in the regions of the dips, the polarisation lags behind the field. At higher frequencies the movement of charge cannot keep up with the alternating field, and the polarisation mechanism ceases to contribute to the polarisation of the dielectric. As frequency increases, the material's net polarisation drops as each polarisation mechanism ceases to contribute, and hence its dielectric constant drops. The animation below illustrates these effects. At sufficiently high frequencies (above ~10¹⁵ Hz), none of the polarisation mechanisms are able to switch rapidly enough to remain in step with the field. The material no longer possesses the ability to polarise, and the dielectric constant drops to 1 – the same as that of a vacuum. The resonances of the ionic and electronic polarisation mechanisms are illustrated below. Effect of structure on the dielectric constant We have already seen that the more available polarisation mechanisms a material possesses, the larger its dielectric constant will be. For example, materials with permanent dipoles have larger dielectric constants than similar, non-polar materials. In addition, the more easily the various polarisation mechanisms can act, the larger the dielectric constant will be.
For example, among polymers, the more mobile the chains are (i.e. the lower the degree of crystallinity), the higher the dielectric constant will be. For polar structures, the magnitude of the dipole also affects the magnitude of polarisation achievable, and hence the dielectric constant. Crystals with non-centrosymmetric structures, such as barium titanate, have especially large spontaneous polarisations and so correspondingly large dielectric constants. Conversely, a polar gas tends to have smaller dipoles, and its low density also means there is less to polarise; therefore polar gases have lower dielectric constants than polar solids or liquids. The density argument also applies for non-polar gases when compared with non-polar solids or liquids. Effect of temperature on the dielectric constant For materials that possess permanent dipoles, there is a significant variation of the dielectric constant with temperature. This is due to the effect of heat on orientational polarisation. However, this does not mean that the dielectric constant will increase continually as temperature is lowered. There are several discontinuities in the dielectric constant as temperature changes. First of all, the dielectric constant will change suddenly at phase boundaries. This is because the structure changes in a phase change and, as we have seen above, the dielectric constant is strongly dependent on the structure. Whether κ will increase or decrease at a given phase change depends on the exact two phases involved. There is also a sharp decrease in κ at a temperature some distance below the freezing point. Let us now examine the reason for this. In a crystalline solid, there are only certain orientations permitted by the lattice. To switch between these different orientations, a molecule must overcome a certain energy barrier ΔE. ![Energy barriers in a crystal lattice with no external electric field](images/barrier-nofield.jpg) When an electric field is applied, the potential energy of orientations aligned with the field is lowered while the energy of orientations aligned against the field is raised. This means that less energy is required to switch to orientations aligned with the field, and more energy is required to switch to orientations aligned against the field. ![Energy barriers in a crystal lattice with an external electric field](images/barrier-withfield.jpg) Therefore over time molecules will become aligned with the field. However, they must still overcome an energy barrier in order to do this. If a molecule possesses an energy less than the height of the barrier, it cannot cross the barrier and therefore cannot change its orientation. Hence the orientational mode becomes “frozen out” and can no longer contribute to the overall polarisation, leading to a drop in the dielectric constant. These effects are summarised in the graph below. Loss in dielectrics = An efficient dielectric supports a varying charge with minimal dissipation of energy in the form of heat. There are two main forms of loss that may dissipate energy within a dielectric. In conduction loss, a flow of charge through the material causes energy dissipation.
Dielectric loss is the dissipation of energy through the movement of charges in an alternating electromagnetic field as polarisation switches direction. Dielectric loss is especially high around the relaxation or resonance frequencies of the polarisation mechanisms, as the polarisation lags behind the applied field, causing an interaction between the field and the dielectric's polarisation that results in heating. This is illustrated by the diagram below (recall that the dielectric constant drops as each polarisation mechanism becomes unable to keep up with the switching electric field). ![Image of graph of dielectric loss against frequency](images/loss.gif) Dielectric loss tends to be higher in materials with higher dielectric constants. This is the downside of using these materials in practical applications. Dielectric loss is utilised to heat food in a microwave oven: the frequency of the microwaves used is close to the relaxation frequency of the orientational polarisation mechanism in water, meaning that any water present absorbs a lot of energy that is then dissipated as heat. The exact frequency used is slightly away from the frequency at which maximum dielectric loss occurs in water, to ensure that the microwaves are not all absorbed by the first layer of water they encounter, therefore allowing more even heating of the food. Dielectric breakdown At high electric fields, a material that is normally an electrical insulator may begin to conduct electricity – i.e. it ceases to act as a dielectric. This phenomenon is known as dielectric breakdown. The mechanism behind dielectric breakdown can best be understood using band theory. A detailed explanation can be found in a separate TLP; not all of that material is relevant here, so only the aspects of band theory needed to understand dielectric breakdown are presented here. For each material, there is a characteristic field strength needed to cause dielectric breakdown. This is referred to as the breakdown field or dielectric strength. Typical values of the dielectric strength lie in the range 10⁶–10⁹ V m⁻¹. The exact value of the dielectric strength depends on many factors – most obviously the size of the energy gap, but also the geometry and microstructure of the sample and the conditions it is subjected to. The phenomenon of dielectric breakdown is utilised in cigarette lighters and similar devices where a spark must be produced in order to ignite the fuel. The “spark gap” is a small air gap between two electrodes. Charge is built up on the electrodes on either side of the spark gap until the strength of the field across the spark gap exceeds the dielectric strength of air (the mechanism used to create this field is not directly relevant to this TLP). At this point the air within the spark gap becomes capable of conduction, resulting in a spark. Applications of dielectrics = A major use of dielectrics is in fabricating capacitors. These have many uses including storage of energy in the electric field between the plates, filtering out noise from signals as part of a resonant circuit, and supplying a burst of power to another component. The larger the dielectric constant, the more charge the capacitor can store in a given field; therefore ceramics with non-centrosymmetric structures, such as the titanates of group 2 metals, are commonly used. In practice, the material in a capacitor is in fact often a mixture of several such ceramics.
This is due to the variation of the dielectric constant with temperature discussed earlier. It is generally desirable for the capacitance to be relatively independent of temperature; therefore modern capacitors combine several materials with different temperature dependences, resulting in a capacitance that shows only small, approximately linear temperature-related variations. Of course in some cases a low dielectric loss is more important than a high capacitance, and therefore materials with lower values of κ – and correspondingly lower dielectric losses – may be used in these situations. Some applications of dielectrics rely on their electrically insulating properties rather than their ability to store charge, so high electrical resistivity and low dielectric loss are the most desirable properties here. The most obvious of these uses is insulation for wires, cables etc., but there are also applications in sensor devices. For example, it is possible to make a type of strain gauge by evaporating a small amount of metal onto the surface of a thin sheet of dielectric material. ![Strain gauge image](images/strain-gauge.jpg) Electrons may travel across the metal by normal conduction, and through the intervening dielectric material by a phenomenon known as quantum tunnelling. A mathematical treatment of this phenomenon is outside the scope of this TLP; simply note that it allows particles to travel between two “permitted” regions that are separated by a “forbidden” region, and that the extent to which tunnelling occurs decreases sharply as the distance between the permitted regions increases. In this case the permitted regions are the solidified metal droplets, and the forbidden region is the high-resistance dielectric material. If the dielectric material is strained, it will bow, causing the distances between the metal islands to change. This has a large impact on the extent to which electrons can tunnel between the islands, and thus a large change in current is observed. Therefore the above device makes an effective strain gauge. Summary = * Dielectrics are electrical insulators that support charge. * The properties of dielectrics are due to polarisation. * There are three main mechanisms by which polarisation arises on the microscopic scale: electronic (distortion of the electron cloud in an atom), ionic (movement of ions) and orientational (rotation of permanent dipoles). * A capacitor is a device that stores charge, usually with the aid of a dielectric material. Its capacitance is defined by Q = C V * The dielectric constant κ indicates the ability of the dielectric to polarise. It can be defined as the ratio of the dielectric's permittivity to the permittivity of a vacuum. * Each of the polarisation mechanisms has a characteristic relaxation or resonance frequency. In an alternating field, at each of these (material-dependent) frequencies, the dielectric constant will drop sharply. * The dielectric constant is also affected by structure, as this affects the ability of the material to polarise. * Polar dielectrics show a decrease in the dielectric constant as temperature increases. * Dielectric loss is the absorption of energy by movement of charges in an alternating field, and is particularly high around the relaxation and resonance frequencies of the polarisation mechanisms. * Sufficiently high electric fields can cause a material to undergo dielectric breakdown and become conducting.
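To make the frequency behaviour summarised above concrete, the sketch below evaluates a single-relaxation-time (Debye) model of orientational polarisation – a standard textbook model, named here for clarity but not derived in this TLP. It reproduces the step down in the dielectric constant and the accompanying loss peak near the relaxation frequency; all parameter values are illustrative assumptions.

```python
import numpy as np

# Debye model: a single orientational relaxation with time constant tau.
# eps_real plays the role of the dielectric constant, eps_imag the loss.
# Parameter values below are assumed for illustration only.

eps_s   = 80.0    # static dielectric constant (polar liquid, water-like)
eps_inf = 5.0     # high-frequency limit (electronic + ionic modes only)
tau     = 1e-11   # orientational relaxation time, s

f = np.logspace(6, 14, 400)     # frequency, Hz
wt = 2 * np.pi * f * tau        # omega * tau, dimensionless

eps_real = eps_inf + (eps_s - eps_inf) / (1 + wt**2)
eps_imag = (eps_s - eps_inf) * wt / (1 + wt**2)

# The loss peaks where omega * tau = 1, i.e. near 1/(2*pi*tau):
i = np.argmax(eps_imag)
print(f"loss peak near {f[i]:.2e} Hz; kappa falls from {eps_real[0]:.1f} "
      f"to {eps_real[-1]:.1f} across the relaxation")
```

Note that the Debye form describes relaxation (orientational) behaviour only; the ionic and electronic modes are resonances and would need a damped-oscillator term instead.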
Questions = ### Quick questions*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*1. A Ca²⁺ cation and an O²⁻ anion are separated by a distance of 2.4 Å. Calculate the resultant dipole moment. (Charge on an electron = 1.6 × 10⁻¹⁹ C) 2. Consider a capacitor in a computer power supply, possessing a capacitance of 2200 μF. If a voltage of 10 V is applied to this capacitor, what will the charge on the positive plate be? (2 sig figs) 3. In which of the cases below does A have a higher static dielectric constant than B, assuming that both A and B are dielectrics? (note that more than one answer may be correct) | | | | | - | - | - | | | a | A has a higher permittivity than B. | | | b | A is a non-polar gas; B is a polar gas. | | | c | Two capacitors, X and Y, have identical geometry. Capacitor X contains a sample of A and has a capacitance of 200 nF. Capacitor Y contains a sample of B and has a capacitance of 0.6 μF. | | | d | A is a sample of water at a temperature of 20 °C; B is a sample of water at a temperature of 80 °C. | | | e | An electric field of strength 1000 V m⁻¹ passes through both A and B. The field strength is reduced less on passing through A than it is on passing through B. | 4. Under what conditions is the refractive index related to the dielectric constant by κ ≈ *n*²? | | | | | - | - | - | | | a | Any dielectric at low frequency, non-polar dielectrics at all frequencies | | | b | Any dielectric at optical frequency, non-polar dielectrics at all frequencies | | | c | Any dielectric at low frequency, polar dielectrics at all frequencies | | | d | Any dielectric at optical frequency, polar dielectrics at all frequencies | 5. A polar liquid is subjected to an alternating current at 50 Hz. The current frequency is then increased to just above the relaxation frequency of the orientational mode of polarisation. Which of these best describes the behaviour of the dielectric constant as the frequency is increased? | | | | | - | - | - | | | a | It remains roughly constant. | | | b | It gradually decreases. | | | c | It remains roughly constant until the relaxation frequency of the orientational mechanism is reached, at which point it drops sharply. | | | d | It gradually decreases until the relaxation frequency of the orientational mechanism is reached, at which point it drops sharply. | 6. And which best describes the behaviour of the dielectric loss as the frequency is increased? | | | | | - | - | - | | | a | It gradually decreases. | | | b | It gradually increases. | | | c | It shows a dip around the relaxation frequency of the orientational mechanism. | | | d | It shows a peak around the relaxation frequency of the orientational mechanism. | 7. You need to make a capacitor that will operate at low electric field strengths and store a large quantity of charge. Energy efficiency does not need to be high (i.e. loss can be tolerated). Which of the following are you most likely to place between the capacitor plates? | | | | | - | - | - | | | a | Nothing - I would use an empty capacitor. | | | b | A ceramic with a non-centrosymmetric structure. | | | c | A polymer with polar chains and an amorphous structure. | | | d | A polymer with polar chains and a crystalline structure. | Going further = ### Website * A derivation of the relationship between the dielectric constant and the refractive index. ### Books * *Dielectrics*, P. J.
Harrop, 1972 (Butterworths) Contains a more mathematical treatment of dielectrics, as well as information on many other potential applications. * *The Solid State*, Second Edition, H. M. Rosenberg, 1978 (OUP) Chapter 13, “Dielectric properties”, provides a good overview of many of the subjects discussed here and contains the latter part of the derivation for the relationship between the dielectric constant and the refractive index. * *Electronic and Magnetic Behaviour of Materials*, A. Nussbaum, 1967 (Prentice-Hall) pp.70-77 Provides a more detailed look at how the properties of dielectrics arise from their microscopic polarisation.
Aims On completion of this TLP you should: * Understand why the spots on an electron diffraction pattern appear where they do. * Know how to index a diffraction pattern from a sample with a known lattice. Before you start It is strongly recommended that you read through the TLPs on reciprocal space and X-ray diffraction before reading this TLP. Introduction Electrons can act as waves as well as particles; this is a consequence of quantum mechanics. A series of electrons hitting an object is exactly equivalent to a beam of electron waves hitting the object, and it produces a diffraction pattern in the same way as a beam of X-rays does. The two important differences between electron and X-ray diffraction are that (1) *electrons have a much smaller wavelength than X-rays*, and (2) *the sample is very thin in the direction of the electron beam* (of the order of 100 nm or less) - it has to be thin so that enough electrons can get through to form a diffraction pattern without being absorbed. These factors conspire to have a fortunate effect on the Ewald sphere construction (see the X-ray diffraction TLP) and diffraction pattern: 1. The thin sample makes the reciprocal lattice points longer in the reciprocal direction corresponding to the real-space dimension in which the sample is thin: ![Diagram of thin sample and reciprocal lattice points](images/img001.gif) It should be noted that there is not necessarily always a particular plane oriented like this. However, it is usual for identification of crystalline phases in a sample to orient the sample so that the electron beam is parallel to a low index lattice direction, as this makes the electron diffraction pattern easier to interpret. 2. The small electron wavelength makes the radius of the Ewald sphere very large (recall its radius is 1/*λ*). The small electron wavelength also makes the diffraction angles *θ* small (1–2°); this can be seen by substituting a wavelength of 2.51 × 10⁻¹² m into the Bragg equation (see the X-ray diffraction TLP). These make the Ewald sphere diagram look like this, so that whole layers of the reciprocal lattice end up projected onto the film or screen: ![Diagram showing result of thin sample and small wavelength](images/img002.gif) Note that the large (strong) spot in the middle is the straight-through beam (the beam which has passed through the sample without diffracting). This always has the index 000. Caution 1: systematic (kinematic) absences appear in electron diffraction patterns just as in X-ray diffraction patterns, for the same reason: the various features of the lattice or motif diffract electrons in the same direction but the phase factors from the various features cancel, leaving an absence. Caution 2: sometimes where there should be a systematic absence, the spot appears to be still there. This is because of the strong interaction between electrons and atoms: there is a small but significant probability that an electron will be diffracted twice, from two planes one after another - i.e. in two different reciprocal lattice directions one after another. These two directions can add up so that the twice-diffracted electron may arrive at a position in reciprocal space where there is a systematic absence. As an example, the diagram below is a schematic of the [011] electron diffraction pattern of silicon: the 200 type reflections are systematically absent. The intensity at the 200 reflections is caused by double diffraction (arising from the addition of the two reciprocal lattice vectors shown).
Thus, in words, intensity can occur in the 200 reflection from, firstly, diffraction from the 111 planes, followed by, secondly, diffraction by the 111 planes as the electron wave passes through the specimen. ![Diagram of the electron diffraction pattern of silicon](images/img003.gif) Mathematics relating the real space to the electron diffraction pattern = Relation 1 The distance, *r*hkl, on the pattern between the spot *hkl* and the spot 000 is related to the interplanar spacing between the *hkl* planes of atoms, *d*hkl, by the following equation: \[{r\_{hkl}} = \frac{{\lambda L}}{{{d\_{hkl}}}}\] where *L* is the distance between the sample and the film/screen. We can therefore say that *the diffraction pattern is a projection of the reciprocal lattice with projection factor λL*, because reciprocal lattice vectors have length 1/*d*hkl. Relation 2 Since the diffraction pattern is a projection of the reciprocal lattice, *the angle between the lines joining spots h₁k₁l₁ and h₂k₂l₂ to spot 000 is the same as the angle between the reciprocal lattice vectors* [*h*₁*k*₁*l*₁]\* *and* [*h*₂*k*₂*l*₂]\*. This is also equal to the angle between the (*h*₁*k*₁*l*₁) and (*h*₂*k*₂*l*₂) planes, or equivalently the angle between the normals to the (*h*₁*k*₁*l*₁) and (*h*₂*k*₂*l*₂) planes. This angle is *θ* in the diagram below. ![Diagram of part of reciprocal lattice](images/img005.gif) Using these two relations between the diffraction pattern and the reciprocal lattice, we are now able to index the electron diffraction pattern from a specimen of a known crystal structure. The indexing procedure outlined here covers only the central region of the diffraction pattern - the rest will be dealt with later. Laue zones So far we have been looking at the central region of the diffraction pattern. This is only a part of the total diffraction pattern. If we look again at the Ewald sphere construction, we have: ![Diagram showing result of thin sample and small wavelength](images/img002.gif) We have been indexing the portion in the middle with the 000 spot in it. However, there are also areas of diffraction spots at the edges of the film, caused by the Ewald sphere intersecting points in an adjacent parallel plane containing reciprocal lattice points. (If the film was small or the camera length large, it is possible that it did not catch these spots at the side, so that we sometimes only have the middle part.) These outlying parts of the diffraction pattern are called Higher Order Laue Zones (HOLZs). Each of the HOLZs can be described by an equation of the general form *hu* + *kv* + *lw* = *N* where: * *N* is always an integer, and is called the *order* of the Laue zone. * [*uvw*] is the direction of the incident electron beam. * *hkl* are the co-ordinates of an allowed reflection in the *N*th order Laue zone. ![Diagram](images/img019.gif) The middle part of the diffraction pattern, with 000 in it, is the zero order Laue zone (ZOLZ), because it comes from the plane for which *N* = 0: an allowed reflection *hkl* in the ZOLZ is joined to the origin 000 by a reciprocal lattice vector that lies in the ZOLZ. For the ZOLZ the electron beam [*uvw*] and the allowed reflection *hkl* satisfy the Weiss zone law *hu* + *kv* + *lw* = 0. The next layer up has a value *N* = 1, then *N* = 2, and so on, as shown.
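The Weiss zone law, together with the structure-factor rules for the lattice, makes it easy to enumerate the allowed ZOLZ reflections by computer. The sketch below does this for silicon with the beam along [011], using the standard diamond-cubic reflection conditions; it is a minimal illustration under those assumptions rather than a full indexing tool.

```python
from itertools import product

# Enumerate allowed zero order Laue zone (ZOLZ) reflections for silicon
# with the electron beam along [011].  The Weiss zone law selects the
# zone; the diamond-cubic reflection conditions (h, k, l all odd, or all
# even with h + k + l divisible by 4) give the systematic absences.

beam = (0, 1, 1)                               # incident beam direction [uvw]

def in_zolz(hkl, uvw):
    h, k, l = hkl
    u, v, w = uvw
    return h * u + k * v + l * w == 0          # Weiss zone law

def diamond_allowed(hkl):
    h, k, l = hkl
    if all(n % 2 == 1 for n in (h, k, l)):     # all indices odd
        return True
    if all(n % 2 == 0 for n in (h, k, l)):     # all even ...
        return (h + k + l) % 4 == 0            # ... and sum divisible by 4
    return False

for hkl in product(range(-3, 4), repeat=3):
    if hkl != (0, 0, 0) and in_zolz(hkl, beam) and diamond_allowed(hkl):
        print(hkl)
# 200-type reflections satisfy the zone law but fail the structure-factor
# test, so they do not appear in this list; in a real pattern, intensity
# can still arise at those positions through double diffraction (above).
```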
From the geometry of the way in which the Ewald sphere intersects the HOLZs, the radius of the *N*th HOLZ ring, *R*N, in reciprocal space, is given to a very good approximation by the formula \[{R\_N} = \sqrt {\left( {\frac{{2N}}{{\lambda |uvw|}}} \right)} \] assuming that the wavelength of the electrons is much less than the modulus |*uvw*| of the direction [*uvw*] in the crystal parallel to the electron beam direction. Thus, HOLZs are seen more easily at lower voltages (e.g. 100 kV rather than 300 kV) and when the electron beam is parallel to a relatively high index direction in a crystal. It is possible to index the reflections in the HOLZs on a diffraction pattern. Examples of such indexing are given in the book *Transmission Electron Microscopy of Materials* by D B Williams and C B Carter. Kikuchi lines = Kikuchi lines often appear on electron diffraction patterns: an example of a "two-beam" electron diffraction pattern with a number of Kikuchi lines is shown, with one pair of Kikuchi lines arrowed. [The term "two-beam" denotes the fact that the straight-through beam, 000, and one diffraction spot are both diffracting very strongly. The intensities of all other spots in this electron diffraction pattern are significantly weaker by comparison with these two beams.] We will not learn to index the Kikuchi lines in this TLP. Instead, we will explain their origin and behaviour with the help of the following animation. Kikuchi lines are interesting because of what they do when the crystal is moved in the beam. Diffraction spots fade or become brighter when the crystal is rotated or tilted, but stay in the same places; the Kikuchi lines move across the screen. The difference in behaviour can be explained by the position of the effective source of the electrons that are Bragg-scattered to produce the two phenomena. The diffraction spots are produced directly from the electron beam, which either hits or misses the Bragg angle for each plane; so the spot is either present or absent depending on the orientation of the crystal. The source of the electrons that are Bragg-scattered to give Kikuchi lines is the set of inelastic scattering sites within the crystal. When the crystal is tilted the effective source of these inelastically scattered electrons is moved, but there are always still some electrons hitting a plane at the Bragg angle - they merely emerge at an angle different from the one at which they emerged before the crystal was tilted. Using polycrystalline materials in the TEM Just as with X-rays, a completely isotropic fine-grained polycrystalline sample will give a diffraction pattern of concentric rings in the zero order Laue zone (ZOLZ), as the many small crystals at random orientations produce a continuous angular distribution of *hkl* spots at a reciprocal-space distance 1/*d*hkl from the 000 spot - a ring of radius 1/*d*hkl around the 000 spot for each allowed reflection. The rings are then indexed according to the order of allowed reflections within the ZOLZ. As the grain size increases, the rings within the diffraction pattern break up into discontinuous rings containing discrete reflections. If there is any texture (preferred orientation) within the specimen, arcs may be seen instead of complete rings. Convergent beam electron diffraction (CBED) = When a convergent beam is used instead of a parallel beam of electrons, the rays converge to a point within the specimen and come out the other side inverted, as in a camera.
However, we do not look at the inverted image; we look at the diffraction pattern, with the spots magnified: ![Diagram illustrating convergent beam electron diffraction](images/img009.gif) Depending on the camera length chosen, either the zero order Laue zone alone, or the zero order and higher order Laue zones together, can be examined. Two examples of CBED images are shown below. The symmetry seen in such patterns can be related to the space group symmetry of the specimen. Examples of CBED images: | | | | - | - | | | | | Diffraction pattern showing a zero order Laue zone with a mirror in the pattern as shown (from H.H. Hng, Ph.D. thesis, University of Cambridge, 1999) | Diffraction pattern showing a first order Laue zone (from H.H. Hng, Ph.D. thesis, University of Cambridge, 1999) | Using other methods in conjunction with electron diffraction Electron diffraction is a powerful technique - but other techniques must be used with it to put the results in context. This is a brief synopsis of how other methods can be used to help. X-ray Diffraction - The majority of novel / unknown crystalline materials are indexed by single crystal X-ray diffraction methods. Originally, photographs were taken along crystallographic axes and indexing performed manually. Nowadays, data are collected with a single point or two-dimensional electronic detector and the data are indexed with automatic computer programs. When single crystals cannot be grown, data from a polycrystalline sample may be used to determine unit cells of crystal structures. Simple structures, such as cubic crystal structures, can be indexed manually by looking for integer relationships between interplanar spacings. For more complex structures, such as orthorhombic, monoclinic and triclinic ones, there are several different types of computer program. However, as there is a substantial loss of information in going from single crystal diffraction data to powder diffraction data, indexing powder diffraction data is demanding. Single crystal X-ray diffraction data are the first choice. Information from both electron and X-ray diffraction is sometimes combined to tackle difficult crystal structures. Hints from other methods may also be useful. Indexing X-ray reflections from both single crystals and powders also gives information about the space group of the material under investigation from considerations of symmetry and systematic absences. Measurement of accurate unit cell lattice parameters can also be undertaken - this requires high quality, high angle X-ray diffraction data. *References* [1] B.D. Cullity and S.R. Stock, *Elements of X-ray Diffraction*, 3rd edition, Prentice Hall (2001) [2] L.S. Dent Glasser, *Crystallography and its Applications*, Van Nostrand Reinhold (1977) [3] International Union of Crystallography website. Optical imaging - This is a very important way of analysing a specimen. Using the naked eye and optical microscopes we can determine, down to a point-to-point resolution limited by the wavelength of light, how many phases there are and how they relate to one another. We can also infer what type of material they are likely to be and how they may have been processed. Chemical analysis - A wide range of chemical techniques can be used to find out what components are present in the different phases and in what proportions. This will narrow the field of possible elements that we need to consider when analysing our diffraction results.
These techniques range from simple chemical tests, through infrared spectroscopy of organic samples, to a wide variety of chemical characterisation techniques that can often be performed within the transmission electron microscope. TEM imaging - Using the TEM to image the same area of sample that is being used to produce the diffraction pattern is an invaluable technique: | | | | - | - | | Nitrided surface layer of austenitic stainless steel | Diffraction pattern from nitrided surface layer of austenitic stainless steel | Using the image to verify that the double dots in the diffraction pattern are being caused by the two crystal structures either side of the twin boundary, we can index the pattern and determine the twin plane and the crystal structures either side of it. Summary = In this teaching and learning package we have considered how electron diffraction patterns are formed in the transmission electron microscope. The principles of how to index spot electron diffraction patterns have been discussed in some detail. Although we have considered how to index electron diffraction patterns from relatively simple crystal structures to illustrate the basic principles, these principles are generic and can therefore be applied to any crystal structure. We have also considered other features of electron diffraction patterns such as the formation of Kikuchi lines, the formation of convergent beam electron diffraction patterns and the formation of higher order Laue zones. Questions = ### Quick questions*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*1. Which of the following is *not* an effect of the small wavelength of the electron? | | | | | - | - | - | | | a | The angle of diffraction is very small | | | b | The reciprocal lattice points are elongated | | | c | The Ewald sphere is very large | | | d | The diffraction pattern is produced from a plane of reciprocal lattice points | 2. Why might you get a diffraction spot where you thought there would be an absence? | | | | | - | - | - | | | a | The atoms moved across a little | | | b | Inelastic scattering | | | c | Some electrons were diffracted two or more times | | | d | There is a fault in the lens | 3. Where do higher order Laue zones come from? | | | | | - | - | - | | | a | The Ewald sphere cutting parts of reciprocal lattice planes other than the one tangent to the sphere | | | b | A different scattering mechanism | | | c | Internal reflection in the microscope | | | d | Small crystals on the edge of the "single" crystal | ### Deeper questions*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*4. If you increase the camera length, what happens to the diffraction pattern? What about the Kikuchi lines? 5. Why would it not be possible to index a zero order Laue zone in a diffraction pattern from a cubic crystal, knowing the two points 111 and 111? ### Open-ended questions*The following questions are not provided with answers, but intended to provide food for thought and points for further discussion with other students and teachers.*6. We have seen how electron diffraction relates to X-ray diffraction; how do you think neutron diffraction compares to electron diffraction?
(Assume that we have a sufficient vacuum to enable the neutrons to reach the sample and be diffracted, and that we have some means of detecting the diffracted neutrons.) Going further = ### Books * *Electron Microscopy of Thin Crystals* by P B Hirsch, A Howie, R B Nicholson, D W Pashley & M J Whelan, published by Krieger. Now out of print but explains everything very clearly. * *Transmission Electron Microscopy of Materials* by D B Williams and C B Carter. This is a series of four books that contains very detailed descriptions of what happens and how to operate the microscope. Explanations are often in quantum mechanical terms and can be hard going if you want a quick reminder of how something works. * *Electron Microscopy and Analysis* by P J Goodhew, J Humphreys and R Beanland. A clear guide to the principles and phenomena involved in electron microscopy.
Aims On completion of this TLP you should: * Be familiar with one-dimensional and two-dimensional diffraction using a laser. * Be able to relate features of the diffraction pattern to the diffracting object. * Understand the concepts of the back focal plane; and bright and dark field image formation. Before you start Whilst a very basic knowledge of the physics of waves and optics is assumed, this teaching and learning package covers the fundamentals of diffraction and imaging. Introduction The phenomenon of diffraction was first documented in 1665 by the Italian scientist Francesco Maria Grimaldi. The use of lasers has only become common in the last few decades. The laser's ability to produce a narrow beam of monochromatic radiation in the visible light range makes it ideal for use in diffraction experiments: the diffracted light forms a clear pattern that is easily measured. As light, or any wave, passes a barrier, the waveform is distorted at the boundary edge. If the wave passes through a gap, more obvious distortion can be seen. As the gap width approaches the wavelength of the wave, the distortion becomes even more obvious. This process is known as diffraction. If the diffracted light is projected onto a screen some distance away, then interference between the light waves creates a distinctive pattern (the diffraction pattern) on the screen. The nature of the diffraction pattern depends on the nature of the gap (or mask) which diffracts the original light wave. Diffraction patterns can be calculated by Fourier transformation of a function representing the mask. The symmetry of the pattern can reveal useful information on the symmetry of the mask. For a periodic object, the pattern is equivalent to the reciprocal lattice of the object. In conventional image formation, a lens focuses the diffracted waves into an image. Since the individual sections (spots) of the diffraction pattern each contain information, by forming an image from only particular parts of the diffraction pattern, the resulting image can be used to enhance particular features. This is used in *bright and dark field imaging*. Diffraction patterns 1 Laser diffraction experiments can be conducted using an optical bench, as shown below. Light from the laser (of wavelength *λ*) is diffracted by a mask (usually a small aperture or grating) and projected onto the screen, located at a large distance away, such that Fraunhofer diffraction applies. The light on the screen is known as the diffraction pattern. Optical Bench The form of the diffraction pattern from a single slit mask, of width *w*, involves the mathematical “sinc function”, where \[{\rm{sinc}}(z) = \frac{{\sin (z)}}{z}\] The observable pattern projected onto the screen (a distance *L* away) has an intensity pattern as follows, where *x* is the distance from the straight-through position: \[I(x) = {I\_{\rm{o}}}{\rm{sin}}{{\rm{c}}^{\rm{2}}}\left( {\frac{{\pi xw}}{{\lambda L}}} \right)\] ![Schematic diagram of slit and screen with dimensions marked](images/diagram1.gif) Note that sinc(0) = 1. ![Graph showing intensity pattern for a single slit](images/diagram2.gif) Diffraction patterns can be calculated mathematically. The operation that directly predicts the amplitude of the diffraction pattern from the mask is known as a Fourier transform (provided the conditions for Fraunhofer diffraction are satisfied). The derivation of some simple patterns can be found elsewhere.
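The intensity expression above is straightforward to evaluate numerically. A minimal sketch follows, locating the first minimum at *x* = *λL*/*w*; the slit width, wavelength and screen distance are illustrative assumptions (note that NumPy's sinc includes the factor of π, so the argument must be written accordingly).

```python
import numpy as np

# Single-slit intensity I(x) = I0 * sinc^2(pi * x * w / (lambda * L)).
# All dimensions below are assumed illustrative values.

lam = 632.8e-9   # He-Ne laser wavelength, m
L   = 1.0        # mask-to-screen distance, m
w   = 50e-6      # slit width, m
I0  = 1.0        # straight-through (central) intensity

x = np.linspace(-0.1, 0.1, 2001)       # position on the screen, m

# np.sinc(z) = sin(pi z)/(pi z), so pass x*w/(lam*L) rather than
# pi*x*w/(lam*L):
I = I0 * np.sinc(x * w / (lam * L)) ** 2

print(f"central intensity = {I[1000]:.2f} * I0")
print(f"first minimum at x = {lam * L / w * 1e3:.2f} mm from the centre")
```

Halving the slit width *w* doubles the distance *λL*/*w* to the first minimum – the inverse relationship developed in the next section.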
| | | | - | - | | | | | (a) s = 12 *μ*m | (b) s = 3 *μ*m | | Diffraction gratings | A diffraction grating is effectively a multitude of equally-spaced slits. The diffraction pattern from a complex mask such as a grating can be constructed from simpler patterns via the convolution theorem. The observed diffraction pattern is composed of repeated "sinc-squared" functions. Their positions from the central spot are determined by *s* (the spacing between slits) and their relative intensity is dependent on *w* (the width of individual slits). ![Diagram of diffraction grating labelled with slit spacing s and slit width w](images/diagram3.gif) Slit spacing s and slit width w ![Graph showing intensity pattern from a diffraction grating](images/diagram4.gif) Diffraction patterns from gratings (a) and (b). Diffraction patterns 2 By considering diffraction from a grating, the reciprocal nature of the pattern can be derived. This relationship can be seen in the diffraction patterns of the slits: small features of the diffracting object give wide spacings in the diffraction pattern: \[s = \frac{{\lambda L}}{x}\] Diffraction patterns from slits of different widths. More complicated masks, for example a periodic row of apertures, will show more intricate diffraction patterns, but still follow the same basic inverse relationship. ![Diagram of mask consisting of periodic row of apertures and resulting diffraction pattern](images/diagram5.gif) ![Photograph of diffraction grating consisting of periodic row of apertures](images/row-s.jpg) Mask consisting of periodic row of apertures ![](images/row-dp.jpg) Diffraction pattern for periodic row of apertures Two-dimensional diffraction = The resulting diffraction pattern of a complex mask can be predicted by considering the individual diffraction patterns associated with the components that make up the shape of the mask. This can be seen in the diffraction pattern of a row of apertures. If two diffraction gratings are superimposed perpendicularly, they form a two-dimensional periodic array of apertures. ![Diagram illustrating superimposition of diffraction gratings to produce a 2D periodic array of apertures](images/diagram6.gif) The observed diffraction pattern is neither the sum nor the product of the original patterns of the individual gratings; instead, the separate patterns are repeated to form a two-dimensional array. This diffraction pattern of a two-dimensional array of apertures is analogous to the reciprocal lattice of the array, and can be labelled (indexed) as such. Inverse axes are therefore created (where *x*\* is perpendicular to *y*, and *y*\* is perpendicular to *x*). In a reciprocal lattice, the magnitude of the reciprocal lattice vector is inversely proportional to the magnitude of the original vector. This inverse relationship is evident between the pattern and the mask (the *x*-axis repeat is smaller than the *y*-axis repeat, whereas the *x*\*-axis repeat is larger than the *y*\*-axis repeat). See the related TLPs for an explanation of the reciprocal lattice in terms of diffraction. ![Diagram of 2D diffraction pattern](images/diagram7.gif) The angle *γ*\* in the diffraction pattern is supplementary to the angle between the grating axes, *γ*, i.e. *γ* + *γ*\* = 180°. This can be seen by rotating the gratings with respect to each other.
![Diagram of new mask created by rotating grating 2](images/diagram8.gif) New mask created by rotating grating 2 ![Diagram of diffraction pattern resulting from new mask](images/diagram9.gif) Image formation = When a convex lens is placed between the mask and the screen, the optical bench can form magnified images of the mask onto the screen. Use of a mirror can simply extend the effective screen distance. Note: caution should be taken when using a mirror to reflect laser light. Optical bench set up for image formation The distance between the object and lens (*u*), the distance between the image and lens (*v*) and the focal length of the lens (*f*) are related by the equation \[\frac{1}{u} + \frac{1}{v} = \frac{1}{f}\] ![Diagram of image formation with a convex lens](images/diagram10.gif) Distances involved A lens will focus light from infinity to the 'focal point', at a distance from the lens known as the focal length, *f*. Located at the focal point is the *back focal plane* of the lens, where the diffraction pattern is visible (by using a screen). The diffraction pattern acts as a source of light that propagates to the screen where the image is formed. This theory was first described by Ernst Abbe in 1872. ![Diagram of mask, lens, back focal plane and screen showing image formation](images/diagram11.gif) The diffraction pattern of a mask without a centre of symmetry will still be symmetrical. This can be seen in the mathematics of calculating the pattern. The non-centrosymmetric nature of the mask will, however, cause non-centrosymmetric variations in the phase. Information obtained from diffraction pattern = The nature of the diffraction pattern (shape, symmetry, dimensions, etc.) is determined by the nature of the mask that diffracts the light. A lens can recombine the (accessible) diffracted light to generate a magnified image of the mask. However, by forming the image from a limited proportion of the pattern, particular elements of the mask can be enhanced. A mask containing many different geometrical elements is shown here: | | | | - | - | | | | | The zebra | A variable aperture can be placed at the back focal plane. Thus the aperture can be adjusted to limit the region of the diffraction pattern that goes on to form the image. The minimum area of the pattern necessary to form a “full” image of the zebra (with overall shape and stripes visible) contains the undiffracted beam and one of the first diffraction spots. In order to properly resolve the features of the mask, both first order diffracted spots should be included. ![Diagram illustrating minimum required pattern](images/diagram12.gif) If only one diffraction spot is allowed through the back focal plane then no information about the spacing of the slits is passed on to the image and individual slits will not be resolved. Note, however, that each diffraction spot is made up of beams scattered from all parts of the object. Therefore, information about the size and shape of the object as a whole is passed on to the image through a single diffraction spot. If the central (zero order) spot (the undiffracted straight-through beam) is solely used, the resulting image is known as the bright field image. If a non-zero order diffraction spot is solely used, the result is known as a dark field image.
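As a quick numerical check of the imaging geometry described above, the sketch below applies the thin-lens equation and evaluates the resulting linear magnification *v*/*u*; the focal length and object distance are illustrative assumptions.

```python
# Thin-lens relation 1/u + 1/v = 1/f, and the linear magnification v/u.
# The focal length and object distance below are assumed illustrative
# values, not taken from this TLP.

f = 0.100    # focal length of the convex lens, m
u = 0.125    # mask (object) to lens distance, m

v = 1 / (1 / f - 1 / u)    # image distance from the lens, m
M = v / u                  # linear magnification of the mask

print(f"image distance v = {v * 1e3:.0f} mm, magnification = {M:.1f}x")
# The diffraction pattern itself sits in the back focal plane,
# a distance f behind the lens.
```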
| | | | - | - | | Photograph of bright field aperture | Photograph of dark field aperture | | Spot selection for bright field imaging | Spot selection for dark field imaging | Applications of the theory of optical diffraction and imaging = The principles of optical diffraction and image formation are equally valid for other waves: for example, neutrons, electron beams and X-rays. The similarity in the diffraction behaviour means that the theory presented here is applicable to them as well. The symmetry of a diffraction pattern can reveal useful information on the symmetry of the mask. This is exploited in the electron diffraction of crystals, where the pattern can reveal the nature of the crystallographic symmetry, e.g. the periodicity of the structure; the distribution of atoms in the unit cell; and the shape of the crystal. X-ray diffraction patterns are used to measure spacing between layers or rows of atoms, and to determine crystal orientations and structures. Electron diffraction patterns are two-dimensional sections of the reciprocal lattice of the diffracting crystal. X-ray diffraction patterns are simply 3-dimensional extensions of Fraunhofer diffraction. With X-rays, the crystal only diffracts in a few directions. The nature of diffraction from a single slit allows macro-scale measurements to be used to calculate micro-scale dimensions. This has important implications - for example, allowing microscopes to resolve features at very fine (nanometre) scales. In optics, the basic shape of the mask is preserved in the bright field image, and some fine detail is lost. In electron diffraction, the contrast of the bright field image is due entirely to thickness and density variations in the sample. A convex glass lens is typically used to focus laser light, but magnetic fields are required to focus electron beams. By selecting individual diffraction spots, dark field images can be used in electron microscopy to distinguish phases (such as characterising two-phase intergrowths in crystals). Multi-beam images (composed of various spots, and known as 'high resolution images') are commonly used to study dislocations. Dark field imaging can be used to highlight the dislocation lines, and by tilting the electron beam, the Burgers vector can be determined. These techniques are common in Transmission Electron Microscopy. Summary = The basic features of diffraction and imaging have been presented in this package. When a wave, such as light, passes through a small aperture, it will be distorted. It will form a distinctive pattern on a screen, known as the diffraction pattern. This pattern contains information on the diffracting aperture (such as a mask or grating), with an inverse relationship in dimensions. The form of the intensity pattern can be predicted mathematically. A lens can be used to form an image of the mask onto the screen. The diffraction pattern of the mask can be seen in the back focal plane of the lens. By forming the image from selected portions of the diffraction pattern in the back focal plane, particular information present in the image can be enhanced. The theories involved can be applied to electrons and X-rays, as well as optics. Questions = 1. Which of the masks below could have produced the following diffraction pattern? ![Diagram of diffraction pattern](images/diagram34.gif) ![Diagram of apertures](images/diagram35.gif) 2. Increasing the spread of the diffraction pattern allows for more accurate measurement of the spot spacing. Which *one* of these will achieve this?
| | | | | - | - | - | | | a | Analysing the pattern in the back focal plane of a lens | | | b | Moving the laser closer to the aperture | | | c | Using a blue light laser instead of a red one | | | d | Moving the screen away from the mask | 3. When the diffraction pattern from a particular grating is projected onto a screen, the even diffraction spots are missing, i.e. the second, fourth, etc. What is the relationship between the width of the slits (*w*) of the grating and the distance between each slit (*s*)? 4. The diffraction pattern from a single slit is projected onto a screen 0.90 m from the slit. Seven spots are visible, with a bright central spot. The maxima of the outer spots are a distance of 15 cm apart. A He-Ne laser is used, which produces light of a wavelength of 0.6328 μm. What is the width of the slit? 5. A convex lens has a focal length of 150 mm. If it is placed 180 mm from a mask on an optical bench, where must the screen be placed in order to focus the diffracted light into a sharp image? 6. As in the previous question, a convex lens has a focal length of 150 mm and is placed 180 mm from a mask on an optical bench. Where must the screen be placed in order to observe the diffraction pattern? 7. As in the previous questions, a convex lens has a focal length of 150 mm and is placed 180 mm from a mask on an optical bench, giving an image distance of 900 mm. What is the magnification of the object in this set-up? 8. As in the previous questions, a convex lens has a focal length of 150 mm and is placed 180 mm from a mask on an optical bench, giving an image distance of 900 mm. If the above lens is 56 mm in diameter, what is the finest grating size that could be resolved theoretically using light of wavelength 0.6328 μm? 9. The theoretical resolution of a microscope is given by ![Equation](images/eqn15.gif) where *n* is the refractive index of the medium (*n* = 1 for air) and sin *α* is known as the numerical aperture, N.A. (commonly printed on the side of a lens). If a microscope can just resolve a "400 lines per mm" grating, what would the N.A. of the lens be? 10. A diffraction pattern shows just two-fold symmetry. Which *one* of these apertures could not have produced such a pattern? ![Diagram of apertures](images/diagram36.gif) 11. The following heart-shaped aperture produces the adjacent diffraction pattern. ![Diagram of aperture and diffraction pattern](images/diagram37.gif) Which of the following masks should be placed in the back focal plane in order to best study the horizontal stripes of the aperture in the image? Dashed lines are shown to identify the location of the central diffraction spot with respect to the mask. ![Diagram of masks](images/diagram38.gif) Going further = ### Books * C. Hammond, *The Basics of Crystallography and Diffraction*, Oxford University Press, 1997. * R. Steadman, *Crystallography*, Van Nostrand Reinhold, student edition, 1982. * J.S. Blakemore, *Solid State Physics*, Cambridge University Press, 1985. ### Websites * A DoITPoMS teaching and learning package on the application of diffraction principles to X-ray diffraction. * A DoITPoMS teaching and learning package that demonstrates some microscopy techniques involving diffraction. * A MATTER module providing an in-depth look at diffraction, including optical, X-ray and electron diffraction.
* A lot of material on diffraction and interference, among many other topics, including interactive Java applets to help visualise the fundamentals of diffraction patterns. * Background information on LASERs on Marshall Brain's HowStuffWorks website.
Aims On completion of this TLP you should: * Understand how diffusion occurs, and what the driving force behind it is. * Understand Fick's laws and what factors will determine the rate at which diffusion occurs. * Understand the effects of microstructure on diffusion. * Be familiar with why diffusion is important to a range of applications. Before you start Diffusion is a fundamental materials science concept, so this TLP is intended to be self-contained; it also aids the understanding of several other TLPs. Introduction Diffusion is the process by which mass flows from one place to another on an atomic, ionic or molecular level. It can also apply to the flow of heat within bodies. We are, perhaps, familiar with the idea of mass transport within liquids and gases by convection. Mass transport in fluids is rarely dominated by diffusion, as convection currents often produce a much greater effect and are very difficult to avoid. Therefore this TLP will discuss diffusion in solids, and will refer mainly to atomic motion, although in practice ionic motion is common. When dealing with a solid, diffusion can be thought of as the movement of atoms within the atomic network, by “jumping” from one atomic site to another. In order for there to be a net flow of atoms from one place to another there must be a driving force; if no driving force exists, atoms will still diffuse, but the overall movement of atoms will be zero, as the flux of atoms will be the same in every direction. We will study this in more detail in the next few sections of this TLP. Diffusion mechanisms Diffusion can occur by two different mechanisms: interstitial diffusion and substitutional diffusion. Picture an impurity atom in an otherwise perfect structure. The atom can sit either on the lattice itself, substituted for one of the atoms of the bulk material, or, if it is small enough, it can sit in an interstice (an interstitial site). These two positions give rise to the two different diffusion mechanisms. Substitutional Diffusion Substitutional diffusion occurs by the movement of atoms from one atomic site to another. In a perfect lattice, this would require the atoms to “swap places” within the lattice. A straightforward swapping of atoms would require a great deal of energy, as the swapping atoms would need to physically push other atoms out of the way in order to swap places. In practice, therefore, this is not the mechanism by which substitutional diffusion occurs. Another theoretical substitution mechanism is ring substitution. This involves several atoms in the lattice simultaneously changing places with each other. Like the direct swap method, ring substitution is not observed in practice: the concerted movement of atoms required is too improbable. Substitutional diffusion occurs only if a vacancy is present. A vacancy is a “missing atom” in the lattice. If a vacancy is present, one of the adjacent atoms can move into the vacancy, creating a vacancy on the site that the atom has just left. In the same way that there is an equal probability of an atom moving into any adjacent atomic site, there is an equal probability that any of the adjacent atoms will move into the vacancy. It is often useful to think of this mechanism as the diffusion of vacancies, rather than the diffusion of atoms.
![Diagram of a vacancy substitution in diffusion](images/vacancy.png)
The diffusion of an atom is therefore dependent upon the presence of a vacancy on an adjacent site, and the rate of diffusion depends upon two factors: how easily vacancies can form in the lattice, and how easy it is for an atom to move into a vacancy. The dependence upon the presence of vacancies makes substitutional diffusion slower than interstitial diffusion, which we will look at now.
Interstitial Diffusion
In this case, the diffusing atom is not on a lattice site but in an interstice. The diffusing atom is free to move to any adjacent interstice, unless it is already occupied. The rate of diffusion is therefore controlled only by the ease with which a diffusing atom can move into an interstice. Theoretically, at very high impurity concentrations movement may be restricted by the presence of atoms in the adjacent interstices. In practice, however, it is very likely that a new phase would form before this had an effect.
Random walk =
If we consider an atom undergoing diffusion, we find that we cannot precisely predict its motion. Where the atom is from time to time is essentially random. If there is a preferred direction of motion (perhaps caused by an electric field) then the atom is more likely to move in that direction. Similarly, if we consider many atoms evenly distributed undergoing diffusion, then their average motion is zero. If there is a preferred direction of motion, we find that there will be a drift in that direction. This random motion can be modelled using a random walk.
One-dimensional random walk -
Imagine an atom sitting in an atomic site. The atom will oscillate ν times per second, where ν is the vibration frequency. The jump distance, λ, is the distance between atomic sites. When left for a time, t, the atom will make a succession of jumps, randomly left or right, and will end up a distance, d, from its starting point. This is known as a "random walk". For a random walk, the root-mean-square distance moved, \(\overline x \), is proportional to the square root of the number of jumps made, n, and therefore to the square root of time: \[\overline x = \lambda \sqrt n \] \[\overline x = \lambda \sqrt {\nu t} \]
It is significant that we deal with the mean distance travelled. Since the movement of the atom is governed by probabilities, there is a statistical distribution of distances travelled by atoms. For a single atom we cannot know where it will be - only assign probabilities to its potential locations. This simulation shows the random walk of an atom along a one-dimensional lattice. When there is no bias, there is an equal probability that the atom will move in either direction. If a bias is applied the atom has a greater probability of moving to the right, and statistically is likely to drift in this direction with time.
Two-dimensional random walk -
We can now extend this model to a two-dimensional lattice. In this case the atom can move in one of four directions: left, right, up or down, each with an equal probability. As with the one-dimensional lattice, a bias produces a net drift, and the root-mean-square distance from the starting point remains proportional to the square root of time. A short numerical sketch of the unbiased one-dimensional walk is given below.
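To make the square-root dependence concrete, here is a minimal Monte Carlo sketch (our own illustration, not the TLP's interactive simulation; the jump distance and walker counts are arbitrary) that averages many independent unbiased one-dimensional walks and compares the measured root-mean-square displacement with λ√n:

```python
import random

def rms_displacement(n_jumps, n_walkers=5000, jump=1.0):
    """Root-mean-square displacement of independent 1D random walkers."""
    total_sq = 0.0
    for _ in range(n_walkers):
        x = 0.0
        for _ in range(n_jumps):
            x += jump if random.random() < 0.5 else -jump  # unbiased jump
        total_sq += x * x
    return (total_sq / n_walkers) ** 0.5

for n in (100, 400, 1600):
    print(f"n = {n:5d} jumps: measured rms = {rms_displacement(n):7.2f}, "
          f"lambda*sqrt(n) = {n ** 0.5:7.2f}")
```

Quadrupling the number of jumps should roughly double the root-mean-square displacement, mirroring \(\overline x = \lambda \sqrt {\nu t} \).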
Fick's first law
We have looked at the mechanisms of diffusion on an atomic scale. We now want to examine the emergent properties of these mechanisms when there are a lot of atoms. The first thing to note before we start is that no real material has a perfect structure: there will be some amount of vacancies or other imperfections present. Therefore if we add impurity atoms to the material, they will be able to move around the material at some rate. If they are interstitial they will move around at a faster rate, since they do not require any vacancies to move.
The second thing to note is that if the impurity atoms are distributed evenly throughout the material, such that there is no concentration gradient, their random motion will not change the concentration of the material, nor will there be a net movement of atoms through the material. We are interested in the cases where there is some kind of energy difference in the material, which causes a net movement of atoms. This can be caused by concentration differences, electric fields, chemical potential differences, etc. We will first look at the case of concentration differences.
The fact that a concentration difference causes diffusion should be familiar to everyone, particularly in the case of liquids and gases. Consider adding a drop of ink to a bowl of water. The ink will diffuse through the water until the concentration is the same everywhere. There is no force pushing the ink particles through the water; the spreading is a statistical result of the random motion of the particles. We can use Fick's laws to examine quantitatively how the concentrations in a material change.
Consider a crystal lattice with a lattice parameter λ, containing a number of impurity atoms. The concentration of impurity atoms, C (atoms m-3), may not be constant over the whole crystal. In this case there will be a concentration gradient across the crystal, which will act as a driving force for the diffusion of the impurity atoms down the concentration gradient (i.e. from the area of high concentration to the area of low concentration).
![Concentration gradient of impurity atoms across the crystal](images/conc_grad.gif)
Fick's first law relates this concentration gradient to the flux, J, of atoms within the crystal (that is, the number of atoms passing through unit area in unit time). Fick's first law is
\(J \equiv - D\left\{ {\frac{{\partial C}}{{\partial x}}} \right\}\)
where D is the diffusivity of the diffusing species. Our equation relating the mean diffusion distance to time can now be rewritten in terms of this parameter:
\[\begin{array}{l} \overline x = \lambda \sqrt {\nu t} \\ D = \frac{1}{6}\nu {\lambda ^2}\\ \overline x = \sqrt {6Dt} \\ \overline x \approx \sqrt {Dt} \end{array}\]
The animation below demonstrates Fick's 1st Law with respect to a fluid. A rough order-of-magnitude estimate using this relation is sketched below.
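The relation \(\overline x \approx \sqrt {Dt} \) is handy for quick estimates. The sketch below uses an assumed diffusivity of 10-11 m2 s-1 (merely a typical order of magnitude for a fast diffuser at elevated temperature, not a value from this TLP):

```python
import math

D = 1e-11  # m^2/s, assumed order-of-magnitude diffusivity

for label, t in [("1 second", 1.0), ("1 hour", 3600.0), ("1 day", 86400.0)]:
    x_bar = math.sqrt(D * t)  # mean diffusion distance, x ~ sqrt(Dt)
    print(f"{label:>8}: x ~ {x_bar * 1e6:8.1f} micrometres")
```

Because of the square root, diffusing ten times further takes a hundred times longer.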
Fick's second law =
Fick's second law is concerned with how the concentration gradient changes with time.
![Diagram of concentration gradient](images/Fick2aa.gif)
By considering Fick's 1st law and the flux through two arbitrary points in the material it is possible to derive Fick's 2nd law:
\[\frac{{\partial C}}{{\partial t}} = D\left\{ {\frac{{{\partial ^2}C}}{{\partial {x^2}}}} \right\}\]
This equation can be solved for certain boundary conditions:
1. "Thin source"
Consider a semi-infinite bar with a small, fixed amount of solute material diffusing in from one end.
![](images/fick2b.jpg)
The amount of solute in the system must remain constant, therefore \(\int\limits_0^\infty {C\left\{ {x,t} \right\}} {\rm{d}}x = B\), where B is a constant. The initial concentration of solute in the bar is zero, therefore \(C\left\{ {x,t = 0} \right\} = 0\). These boundary conditions give the following solution:
\[C\left\{ {x,t} \right\} = \frac{B}{{\sqrt {\pi Dt} }}\exp \left\{ {\frac{{ - {x^2}}}{{4Dt}}} \right\}\]
2. "Infinite source"
Consider a semi-infinite bar with a constant source (i.e. constant concentration) of solute material diffusing in from one end.
![A semi-infinite bar with a constant source (i.e. constant concentration) of solute material diffusing in from one end](images/bar.gif)
In this case, the solution is obtained by stacking a series of "thin sources" at one end of the bar, and summing the effects of all of the sources over the whole bar. The initial concentration of solute in the bar is C0, therefore \(C\left\{ {x,t = 0} \right\} = {C_0}\). The concentration of solute at the end of the bar is a constant, Cs, therefore \(C\left\{ {x = 0,t} \right\} = {C_s}\). These boundary conditions give the following solution:
\[C\left\{ {x,t} \right\} = {C_s} - ({C_s} - {C_0})\,{\rm{erf}}\left\{ {\frac{x}{{2\sqrt {Dt} }}} \right\}\]
erf{x} is known as the error function, and results from the summation of the thin sources at the end of the bar. It is defined as
\[{\rm{erf}}\left\{ x \right\} = \frac{2}{{\sqrt \pi }}\int\limits_0^x {\exp \left\{ { - {u^2}} \right\}} \,{\rm{d}}u\]
The integral can only be evaluated numerically, so erf tables (or a computer) are used to solve the diffusion equation where necessary. This animation shows the applications of Fick's 2nd law and its solutions.
3. Non-Analytical Solutions -
For more complicated situations we cannot obtain an analytical solution for Fick's 2nd law. In these cases numerical analysis is used. Solutions obtained in this way are approximations; however, they can be made as precise as needed. The following demonstration shows how numerical analysis can be used to approximate solutions for various conditions, and a finite-difference sketch is given below.
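As an illustration of such a numerical approach, here is a minimal explicit finite-difference sketch (our own illustrative code with arbitrary parameter values, not the TLP's demonstration) for the infinite-source case, checked against the erf solution above. The time step obeys the usual stability condition DΔt/Δx2 ≤ ½ for this explicit scheme.

```python
import math

D = 1e-11        # diffusivity, m^2/s (assumed value)
L = 2e-3         # bar length, m (long enough to look semi-infinite here)
N = 200          # number of grid points
dx = L / (N - 1)
dt = 0.4 * dx * dx / D   # explicit stability requires D*dt/dx^2 <= 0.5
r = D * dt / (dx * dx)
Cs, C0 = 1.0, 0.0        # surface and initial concentrations (normalised)

C = [C0] * N
C[0] = Cs                # constant-concentration (infinite) source at x = 0

t, t_end = 0.0, 3600.0   # diffuse for about one hour
while t < t_end:
    # Explicit update of Fick's 2nd law; far end held at the initial
    # concentration, since the bar is effectively semi-infinite over this time.
    C = [Cs] + [C[i] + r * (C[i + 1] - 2 * C[i] + C[i - 1])
                for i in range(1, N - 1)] + [C[-1]]
    t += dt

# Compare with the analytical erf solution at a few depths
for i in (10, 30, 60):
    x = i * dx
    exact = Cs - (Cs - C0) * math.erf(x / (2.0 * math.sqrt(D * t)))
    print(f"x = {x * 1e6:6.1f} um: numerical C = {C[i]:.3f}, erf solution = {exact:.3f}")
```

Refining dx and dt improves the agreement, in line with the remark above that numerical solutions can be made as precise as needed.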
Interdiffusion
**Kirkendall Effect** -
We will now consider a diffusion couple: that is, two semi-infinite bars of materials A and B joined together, such that material diffuses from A to B and from B to A. This is interdiffusion. We can assume that during this process the dimensions of the total bar remain the same, and that no porosity develops. The diffusivities of the two materials will not be the same: we will assume that DB > DA. This would, in theory, lead to a net drift of atoms towards A; the diffusion couple would appear to move to one side as you look at it! We know, however, that this does not happen. In practice there is a lattice drift from left to right, so that the bar remains stationary in the laboratory frame. This lattice drift can be detected by placing inert markers at the interface between the two materials, and observing their drift as the two materials interdiffuse. This is known as the Kirkendall effect.
Darken Regime -
The Darken equations are Fick's laws adapted for substitutional diffusion by the motion of vacancies. The equations can be found by equating the flux of atoms and the flux of vacancies within a diffusion couple, and finding the velocity of the lattice drift. This gives the following equation, known as the Darken equation:
\[{J_A}' = - \tilde D\left\{ {\frac{{\partial {C_A}}}{{\partial x}}} \right\}\]
where \({\tilde D}\) is the interdiffusion coefficient, such that
\[\tilde D = {X_A}{D_B} + {X_B}{D_A}\]
![Graph showing Darken regime](images/darken.jpg)
This and the expression for the lattice drift velocity below are known as the Darken relations:
\[v = \frac{1}{{{C_0}}}\left( {{D_A} - {D_B}} \right)\left\{ {\frac{{\partial {C_A}}}{{\partial x}}} \right\}\]
The derivation of this expression is long, and is not reproduced here.
Nernst-Planck Regime
The Darken relations only apply if there are sufficient, and efficient, sinks and sources for vacancies, and if diffusion is slow enough, and over large enough distances, that stresses have sufficient time to relax, so that no porosity results. If the sinks and sources are not sufficient, or if the diffusion distance is small (such as in a multilayer), stresses build up and porosity forms. The Darken regime no longer applies, and the system moves into the Nernst-Planck regime:
\[\frac{1}{{\tilde D}} = \frac{{{X_B}}}{{{D_A}}} + \frac{{{X_A}}}{{{D_B}}}\]
It is often useful to liken the Darken regime to parallel conduction, and the Nernst-Planck regime to series conduction: in the Darken regime the interdiffusion coefficient is controlled by the faster component, whereas in the Nernst-Planck regime it is limited by the slower.
Microstructural effects =
If a material contains grains, the grain boundaries will act as diffusion pathways, along which diffusion is faster than in the bulk material. Consider a cylindrical grain of radius *r* (diameter d = 2r) and grain boundary thickness δ.
![Diagram of cylindrical grain boundary](images/grainboundary.jpg)
The area of the grain boundary in the cross-section is 2πrδ. Every grain boundary is shared between two grains, so the grain boundary area associated with *one* grain is πrδ. The ratio of the area of the grain boundary to that of the bulk is:
\[\frac{{\pi r\delta }}{{\pi {r^2}}} = \frac{\delta }{r} = \frac{{2\delta }}{d}\]
The overall flux through unit cross-sectional area is the sum of the fluxes through the bulk and the grain boundary:
\[J = {J_{\rm{b}}} + {J_{{\rm{gb}}}}\frac{{2\delta }}{d}\]
Since each flux is driven by the same concentration gradient, dividing through by the concentration gradient gives:
\[{D_{{\rm{measured}}}} = {D_{\rm{b}}} + {D_{{\rm{gb}}}}\frac{{2\delta }}{d}\]
It is important to realise that δ is very small, therefore grain boundary diffusion only becomes significant when Dgb >> Db, i.e. at low temperatures. The following animation shows the effect of microstructure on diffusion at various temperatures. Dislocations have a similar effect, providing fast diffusion paths due to the disruption in the lattice, again with a significant effect only at low temperatures. A numerical illustration of this grain-size effect is sketched below.
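To see why the grain-boundary term matters only when Dgb ≫ Db, here is a quick sketch with assumed illustrative numbers (δ, grain size and diffusivities are typical orders of magnitude, not measured values):

```python
delta = 0.5e-9   # grain boundary thickness, m (assumed, ~ atomic dimensions)
d = 10e-6        # grain diameter, m (assumed 10 micrometre grains)

# Assumed illustrative diffusivities (m^2/s). The gap between Db and Dgb
# widens on cooling, because the activation energy for boundary diffusion
# is lower than that for bulk diffusion.
cases = {"high temperature": (1e-14, 1e-12),
         "low temperature":  (1e-24, 1e-15)}

for label, (Db, Dgb) in cases.items():
    gb_term = Dgb * 2 * delta / d        # grain boundary contribution
    D_measured = Db + gb_term            # D_measured = Db + Dgb * (2 delta / d)
    print(f"{label}: bulk term = {Db:.1e}, gb term = {gb_term:.1e}, "
          f"D_measured = {D_measured:.1e} m^2/s")
```

The geometric factor 2δ/d is only about 10-4 here, so the boundary path dominates only once Dgb exceeds Db by many orders of magnitude, which happens at low temperature.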
Temperature Effects =
Enthalpy of migration -
In order for atoms to diffuse they must overcome the energy barrier associated with changing their position. The more kinetic energy the atoms have, the more likely it is that the energy barrier will be overcome. The greater the temperature of the system, the greater the kinetic energy of the atoms, so temperature has a significant effect on the diffusivity of the species. The energy needed to overcome the barrier is the enthalpy of migration.
For a system with attempt frequency ν0, the frequency of successful jumps is given by an Arrhenius dependence:
\[\nu = {\nu_0}\exp \left( {\frac{{ - G^*}}{{{k_B}T}}} \right)\]
where ν0 is the pre-exponential, G* is the magnitude of the energy barrier (the free energy associated with diffusion), kB is Boltzmann's constant and *T* is the temperature. Since G* = H* − TS*, where H* is the enthalpy of migration and S* is the entropy of migration,
\[\nu = {\nu_0}\exp \left( {\frac{{S^*}}{{{k_B}}}} \right)\exp \left( {\frac{{ - H^*}}{{{k_B}T}}} \right)\]
or
\[D = {D_0}\exp \left( {\frac{{ - H^*}}{{{k_B}T}}} \right)\]
where \({D_0} = \frac{1}{6}{\lambda ^2}{\nu_0}\exp \left( {\frac{{S^*}}{{{k_{\rm{B}}}}}} \right)\), which can be assumed to remain constant with varying temperature. H* is often denoted Q, the activation energy for diffusion.
Enthalpy of vacancy formation -
As we have discussed before, the diffusion rate in a substitutional lattice is dependent upon the number of vacancies present, which is also temperature dependent. Therefore, in the case of substitutional diffusion there is a further temperature effect to consider. The equilibrium fraction of vacancies, Xve, also shows an Arrhenius dependence:
\[{{\rm{X}}_{\rm{v}}}^e = \exp \left( {\frac{{ - \Delta {G_v}}}{{{k_B}T}}} \right)\]
\[{{\rm{X}}_{\rm{v}}}^e = \exp \left( {\frac{{\Delta {S_v}}}{{{k_B}}}} \right)\exp \left( {\frac{{ - \Delta {H_v}}}{{{k_B}T}}} \right)\]
where ΔGv is the free energy change on formation of vacancies, ΔSv the entropy change on formation of vacancies and ΔHv the enthalpy change on formation of vacancies.
In the equation for the diffusivity, Q can therefore be separated into two components:
* Qm – the enthalpy of migration due to lattice distortions
* Qf – the enthalpy of formation of a vacancy on an adjacent site
Hence
\[D = {D_0}\exp \left( {\frac{{ - {Q_{\rm{m}}}}}{{{k_B}T}}} \right)\exp \left( {\frac{{ - {Q_{\rm{f}}}}}{{{k_B}T}}} \right)\]
Since the formation of a vacant site is not needed for interstitial atoms, Qinterstitial << Qsubstitutional, and hence Dinterstitial >> Dsubstitutional. Temperature plays a significant role in the rate of diffusion, as it alters both the equilibrium concentration of vacancies and the probability of a successful jump into a neighbouring site. The animation below shows the effect of temperature on both substitutional and interstitial diffusion, and the exponential sensitivity is illustrated in the sketch below.
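A short sketch of the Arrhenius dependence, using assumed round-number values for D0 and Q (typical of substitutional diffusion in a metal, not data from this TLP), shows how strongly D varies with temperature:

```python
import math

k_B = 8.617e-5   # Boltzmann's constant, eV/K
D0 = 1e-5        # pre-exponential, m^2/s (assumed)
Q = 2.0          # activation energy, eV (assumed; migration + vacancy formation)

for T in (300.0, 600.0, 900.0, 1200.0):
    D = D0 * math.exp(-Q / (k_B * T))   # D = D0 exp(-Q / kB T)
    print(f"T = {T:6.0f} K: D = {D:.2e} m^2/s")
```

Between room temperature and 1200 K the diffusivity here changes by over twenty orders of magnitude, which is why diffusion heat treatments are carried out at high temperature.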
Applications of Diffusion =
Diffusion is a key process in much of materials science. We will examine some applications more closely here.
Carburisation -
Carburisation is the process by which carbon is diffused into the surface of steel in order to increase its hardness. The carbon forms carbide precipitates (particularly if the steel contains carbide-forming elements such as manganese or molybdenum) which pin dislocations and prevent slip, thus making the material harder. However, the increased carbon content reduces the toughness of the material. In most applications it is important that the surface of the steel is hard, but the bulk material can remain softer without detriment to the properties of the component. Thus, carbon is often diffused in from the outer surfaces to obtain a material that is hard on the surface but tough in the bulk.
![Micrograph of a carburised steel, showing increased carbon content on the outer surface](images/carburisation.jpg)
A carburised steel, showing increased carbon content on the outer surface
This is done by heating the steel in a carbon-rich atmosphere, so that there is a concentration gradient of carbon across the interface. Carbon diffuses into the steel, and the elevated temperature speeds up the process. The concentration profile of carbon is governed by Fick's second law, as there is effectively an infinite source of carbon.
Nuclear Waste -
In this case diffusion causes a problem that needs to be solved. Radioactive waste from nuclear energy production must be stored in such a way that the radioactive atoms do not diffuse out of the container until the radioactivity levels have dropped sufficiently. This can be a very long time indeed: often around 1000 years. Thus, the container must be made of a material in which the diffusivity of the atoms is very low (and the container must be very thick, to increase the diffusion distance). This will ensure that the time taken for atoms to diffuse out of the container is as long as possible. Generally, the radioactive atoms are suspended in a glass matrix, such as borosilicate glass. The diffusivity of the atoms in this glass is low, so the atoms are less likely to diffuse out of the glass before they have ceased to be strongly radioactive. The glass is then sealed inside steel containers and buried deep in the ground under rocks, in remote areas away from populated regions.
Semiconductors -
![Image of semiconductors](images/semiconductors.jpg)
Gallium nitride semiconductor LEDs
Semiconductors can be made by adding to one material (often silicon) a small number of atoms of another element with a different valency. This is known as doping, and means that there is an excess of charge carriers in the material (electrons if the valency of the dopant is greater than that of the silicon, or holes if it is less). For more details, see the TLP on semiconductors. The doping is often carried out by diffusion: the silicon is placed in a gas of the dopant atoms and heated to high temperatures. The dopant atoms diffuse down the chemical potential gradient into the silicon. As with carburisation, this process follows Fick's second law.
In practice, the diffusion process occurs in two steps. After the initial diffusion described above, the dopant atoms are concentrated mainly at the surface of the silicon. The sample must therefore be annealed in order to "drive in" the atoms, so that they penetrate beyond the surface.
Summary =
In a random walk, the average distance moved from the starting point is proportional to the square root of time: \(\overline x = \lambda \sqrt {\nu {\kern 1pt} t} \)
Diffusion occurs via two mechanisms: either through the movement of vacancies, or through interstitial atoms moving between different interstitial sites.
Fick's 1st and 2nd Laws can be used to calculate the flux of atoms through a crystal structure under different conditions:
\[J \equiv - D\left\{ {\frac{{\partial C}}{{\partial x}}} \right\}\;\;{\sf{Fick's}}\;{\sf{1st}}\;{\sf{Law}}\]
\[\frac{{\partial C}}{{\partial t}} = D\left\{ {\frac{{{\partial ^2}C}}{{\partial {x^2}}}} \right\}\;\;{\sf{Fick's}}\;{\sf{2nd}}\;{\sf{Law}}\]
where D is the diffusivity.
The rate of diffusion varies exponentially with temperature, following the equation:
\[D = {D_0}\exp \left( {\frac{{ - H^*}}{{{k_B}T}}} \right)\]
Diffusivity also increases with the vacancy concentration in substitutional diffusion, which means that it is generally higher along grain boundaries and dislocations due to the disruption to the lattice.
Questions =
### Quick questions
*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*
1. In Fick's 1st Law, *J* (the flux of atoms) is proportional to:
| | | |
| - | - | - |
| | a | the square root of time |
| | b | the concentration gradient |
| | c | the rate of change of the concentration gradient |
| | d | the energy barrier *Q* |
2. Which of these diffusion mechanisms is NOT observed?
| | | |
| - | - | - |
| | a | Vacancy |
| | b | Ring |
| | c | Interstitial |
3. What is Fick's 2nd Law?
| | | |
| - | - | - |
| | a | |
| | b | |
| | c | |
4. What happens to the diffusivity of a material as you increase the temperature?
| | | |
| - | - | - |
| | a | It decreases linearly |
| | b | It increases linearly |
| | c | It decreases exponentially |
| | d | It increases exponentially |
Going further =
### Books
* Porter, D.A. and Easterling, K.E., *Phase Transformations in Metals and Alloys*, Second Edition, Chapman and Hall, 1992.
* Shewmon, P.G., *Transformations in Metals*, McGraw-Hill, 1969.
* Reed-Hill, R.E. and Abbaschian, R., *Physical Metallurgy Principles*, Third Edition, PWS-Kent Publishing Co., Boston, Mass., 1994.
### Websites
* Random walk
* Random walk
* Diffusion
* Kirkendall effect
Aims
On completion of this TLP you should:
* Understand why a dislocation forms and how the dislocation width can be determined using an atomistic model, by minimisation of the misfit energy due to in-plane strain and misalignment.
* Appreciate how the energy changes as a dislocation moves, and how the parameters of atomic spacing, shear modulus and Poisson ratio affect the total misfit energy.
* Be familiar with the concepts of Peierls stress and lattice resistance, and compare theoretical and experimental values. Also understand the difference between the linear elastic analytical solution used by Peierls and the atomistic model.
Before you start
* Look through the TLP on Introduction to Dislocations.
+ The pages on dislocation width and Peierls-Nabarro stress will be particularly useful.
+ A further package shows what happens when dislocations interact to cause work hardening; it also covers the formation of dislocations (Frank-Read sources).
Introduction
Plastic deformation of crystals generally proceeds by the propagation of dislocations along the slip planes. In most brittle crystals and some metals, it is the changes in misfit energy as the dislocation moves that are the predominant obstacle to dislocation motion. These changes arise from atoms in the crystal lattice being displaced from their equilibrium positions, so this effect is known as lattice resistance.
Why do dislocations ever form? Deformation can be thought of as a kinetic process, since dislocation motion (the most common cause of plastic flow) requires the breaking and reforming of bonds, like a chemical reaction.
In this TLP we will be setting up an atomistic model, rather than using continuum elasticity as in the Peierls treatment, where it is assumed that the in-plane strains do not change as the dislocation moves. It can be shown that the resulting expression has the same general form as that derived by Peierls, but a slightly lower magnitude.
The misfit energy is made up of in-plane strains and misalignment strains (across the slip plane). This energy changes as the dislocation moves. The misalignment energy increases as the dislocation width increases, so it acts to localise the misfit strains, whereas the in-plane strain energy decreases as the dislocation width increases, so it acts to spread the misfit strain over a larger region. The final arrangement of atoms results in a minimisation of the overall misfit energy for a given dislocation width.
Considering the energy changes as the dislocation moves allows us to calculate the Peierls stress required to move the dislocation, and to see that it is exponentially dependent on the dislocation width. The Peierls stress depends on the atomic spacing both normal to the slip plane and parallel to it – in fact it is extremely sensitive to the ratio of lattice parameters b/d.
Making a dislocation
We will consider the changes in atomic positions in a given plane of atoms when an edge dislocation is formed in a simple cubic lattice. If the extension of the dislocation is large compared with the atomic spacing b, the horizontal displacement u varies slowly from atom to atom: the relative displacement of neighbouring atoms within each half-crystal is much smaller than b. We start by considering the initial atomic positions in just two planes, **A** and **B**.
![form of a dislocation](images/dislocation_form.jpg)
The variation of the displacement u with distance from the dislocation line
It is clear from the above figure that an inverse tan (arctan) function would be consistent with the displacement.
Peierls showed this was the case.
The following animation shows how the model is set up using the initial atomic positions in just two planes; here we are making the approximation that the displacements of atoms in the planes on either side (in the y direction) are small enough to be neglected from the overall energy. We are able to show that the final displacement (in the x direction) varies as an arctan function, as demonstrated by Peierls.
Join the crystals to form the dislocation =
We have determined that the displacements of the atoms are:
\[{u_A}\left( x \right) = - \frac{b}{{2\pi }}{\tan ^{ - 1}}\left( {\frac{{{x_A}}}{w}} \right)\]
\[{u_B}\left( x \right) = - \frac{b}{{2\pi }}{\tan ^{ - 1}}\left( {\frac{{{x_B}}}{w}} \right)\]
As shown below, the displacements of the atoms on the **A**-plane are symmetrical on either side of the dislocation line, and are zero at the centre. It is clear that the misfit around the dislocation is of two types: a strain in the planes above and below the dislocation, and a misalignment of the atoms across the slip plane. The process of determining the energies is shown in the following animation:
Dislocation width =
For a given atomic configuration, we can work out the total energy and then try different values of w until we find the minimum. Practically (see graph below), this can be done by summing the energy over planes from n = −1000 to n = +1000 either side of the "extra plane of atoms", since the effect of increasing the number of planes beyond this is negligible. As w increases, the decrease in energy associated with localising the in-plane strains (which is why we said a dislocation should never form) is offset by the increase in misalignment energy, which increases with w. There is therefore a minimum in the misfit energy, which gives the dislocation width. A toy numerical sketch of this minimisation is given at the end of this page.
The following graph shows the sums of the misfit energies (in-plane, misalignment and total) resulting from the displacements of planes either side of the initial "extra half-plane of atoms". We can determine the dislocation width (w/b) for given parameters by considering the minimum of the total misfit energy curve. If we look at the value of 0.25G**b**2 for the four materials given, it is similar to the energy value calculated by the model, showing approximate agreement with the continuum elastic estimate of the dislocation energy.
What determines w/b? -
* What we are interested in is the ratio d/b. As d/b increases, w/b increases. d/b is determined by the crystal structure:
+ **b** is the magnitude of the Burgers vector
+ d is the spacing of the close-packed planes
+ We also need to take partial dislocations into consideration. For example, copper is ccp, so the slip system is {111} <110>. But the dislocations dissociate into two partial dislocations \( \frac{a}{6}\) <112>; hence the magnitude of b will be \( \frac{a}{{\sqrt 6 }}\; \), where a is the lattice parameter.
+ Therefore, the ratio d/b in copper will be \( \frac{a}{{\sqrt 3 }} \div \frac{a}{{\sqrt 6 }}\; = \sqrt 2 \)
* The Poisson ratio (ν) affects the in-plane component of the misfit energy. A higher Poisson ratio means the width of the dislocation is greater.
What determines the total energy? -
* The total energy is affected by the magnitude of G**b**2, where G is the shear modulus and **b** is the Burgers vector.
The following clip shows the free surface of a Cd single crystal subject to tensile testing. Slip is occurring on a particular set of planes, and the set of ridges that form on the free surface are created by the arrival of sets of dislocations.
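The following toy sketch (our own illustration, not the TLP's calibrated calculation) reproduces the minimisation described above. It assumes a harmonic form for the in-plane strain energy and a sinusoidal (Frenkel-type) form for the misalignment energy across the slip plane - common Peierls-Nabarro-style choices - with arbitrary prefactors K and A; only the competition between the two terms, not the numbers, is meaningful:

```python
import math

b = 1.0          # atom spacing parallel to the slip plane (arbitrary units)
K = 1.0          # assumed in-plane "bond stiffness" prefactor
A = 0.005        # assumed amplitude of the sinusoidal misalignment potential
N = 1000         # sum over planes n = -N ... +N, as in the TLP

def u(x, w):
    """Arctan displacement field of one half-crystal, u = -(b/2pi) atan(x/w)."""
    return -(b / (2 * math.pi)) * math.atan(x / w)

def misfit_energy(w):
    inplane, misalign = 0.0, 0.0
    for n in range(-N, N + 1):
        x = n * b
        # In-plane strain energy: harmonic in the relative displacement of
        # neighbouring atoms within a half-crystal (counted for both planes).
        du = u(x + b, w) - u(x, w)
        inplane += 2 * 0.5 * K * du * du
        # Misalignment energy: sinusoidal in the disregistry across the slip
        # plane; the two half-crystals displace by 2u on top of the built-in
        # b/2 offset between the A and B planes.
        phi = b / 2 + 2 * u(x, w)
        misalign += A * math.sin(math.pi * phi / b) ** 2
    return inplane + misalign

# Scan w/b to locate the minimum of the total misfit energy
best = min((misfit_energy((0.5 + 0.1 * i) * b), 0.5 + 0.1 * i) for i in range(40))
print(f"minimum total misfit energy {best[0]:.4f} at w/b = {best[1]:.1f}")
```

With these prefactors the minimum falls near w/b ≈ 1.6. Increasing A (a stiffer misalignment penalty) narrows the dislocation, while increasing K widens it, mirroring the roles of d/b and the Poisson ratio described above.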
Form of the displacement
We now need to modify the stationary dislocation model to estimate the change in misfit energy of a dislocation as it moves. The half-plane of atoms moves by much less than the dislocation. The half-plane of atoms was used as the origin, so we think of the dislocation as a moving origin from which we can estimate the displacements. We use the parameter α to describe the fraction of the unit cell across which the dislocation has moved.
As shown in the following animation, when the dislocation has moved through b (where α = 1), the role of the extra half-plane has passed to the adjacent plane of atoms. The dislocation has moved about twice as far as the extra half-plane.
Change in the misfit energy of a dislocation as it moves
The dislocation remains fixed (at the origin) and the lattice moves around it, therefore the initial positions of atoms xA and xB are given by
\[{x_A} = nb - \alpha b\]
\[{x_B} = nb - b/2 - \alpha b\]
We use the same method as for the α = 0 case to determine the in-plane displacement and misalignment. As a result, the potentials for different α values can be determined. The following animation (which is a magnified version of the strain energy graph on the previous page) demonstrates how changing the parameter α affects the position of the energy minimum and the dislocation width. By calculating the misfit energy for each value of α, we can determine the change in misfit energy \(\Delta {U_T}(\alpha )\) as the dislocation moves.
* ΔUT(x) is the difference in total energy between α = x and α = 0.
* At first, as the dislocation moves across the unit cell (0 < |α| < 0.25), ΔUT increases to a maximum.
* At the maximum (α = 0.25), the value of ΔUT is the Peierls energy, ΔUP.
* Then, for 0.25 < |α| < 0.5, ΔUT decreases. At α = 0.5, ΔUT = 0.
* w also changes as the dislocation moves across the cell, because the configuration of the dislocation changes as it moves across the unit cell. At the energy maxima, the dislocation width is also a maximum.
Peierls energy
The maximum change in the total misfit energy is called the Peierls energy, \(\Delta {U_P}\). It is the energy required to move unit length of dislocation over the resistance of the lattice. The following graph shows how the fraction of the unit cell across which the dislocation has moved affects the misfit energy. Adjusting the parameters demonstrates how they affect the Peierls energy.
The misfit energy associated with the in-plane strains varies sinusoidally, as does the total energy, whereas the misfit energy associated with the misalignments is greatest at the position of lowest overall energy. The period is b/2. Hence:
\[\Delta {U_T}\left( \alpha \right) = \frac{1}{2}\Delta {U_P}\left( {1 - \cos 4\pi \alpha } \right)\;\;\;\;\;\;\;(1)\]
This is shown in the following interactive graph. As can be seen in the magnified strain energy graph, the dislocation width also changes as the dislocation moves, although for simplicity simulations often assume it is constant.
The magnitude of the Peierls energy (and hence the misfit energy) scales with G**b**2. The Peierls energy varies exponentially with w/b:
\[\frac{{\Delta {U_{\rm{P}}}}}{{G{b^2}}} \propto \exp \left( { - \frac{w}{b}} \right)\;\;\;\;\;\;\;(2)\]
![dislocation width](images/dislocation_width.png)
As can be seen in the misfit energy graph animation, the dislocation width also changes as the dislocation moves, although for simplicity simulations often assume it is constant. For example, in the energy calculation we assumed that w/b was constant for any value of α.
What is the Peierls stress? =
The Peierls stress is the minimum shear stress required to move a single dislocation of unit length in a perfect crystal. The magnitude of the Peierls stress determines the ability of the lattice to resist dislocation motion – the lattice resistance in the absence of thermal activation. Another way to think about the resistive force to dislocation motion is as the magnitude of the gradient of the strain energy curve: the Peierls stress is proportional to the maximum gradient of the misfit energy curve.
If we used the continuum model, the energy of the dislocation would be independent of position. Any stress, however small, would set the dislocation into motion, because the dislocation would always be in neutral equilibrium and so would move under any force. We know this is not right. We need to take into account the crystal structure on an atomic level, such that the energy of the dislocation depends on its exact position. The dislocation will then have several stable equilibrium positions, which persist until a stress is applied that exceeds a certain magnitude (the Peierls stress), at which point the dislocation can move.
Determining the Peierls stress
The force (per unit length of dislocation) required to move the dislocation is
\[F = \frac{{\delta \Delta {U_T}\left( \alpha \right)}}{{\delta \left( {\alpha b} \right)}}\;\;\;\;\;\;\;(3)\]
Plotting the derivative of the energy-α graph gives a plot of the force required to move a dislocation. Dividing by b gives the corresponding stress required to move the dislocation, per unit length of dislocation:
\[\tau = \frac{1}{b}\frac{{\delta \Delta {U_T}\left( \alpha \right)}}{{\delta \left( {\alpha b} \right)}} = \frac{1}{{{b^2}}}\frac{{\delta \Delta {U_T}\left( \alpha \right)}}{{\delta \alpha }}\;\;\;\;\;\;\;(4)\]
As we saw in expression (1) on the previous page,
\[\Delta {U_T}\left( \alpha \right) = \frac{1}{2}\Delta {U_P}\left( {1 - \cos 4\pi \alpha } \right)\;\;\;\;\;\;\;(5)\]
Hence, differentiating, we get
\[\tau = \frac{{2\pi }}{{{b^2}}}\Delta {U_P}\,\sin \left( {4\pi \alpha } \right)\;\;\;\;\;\;\;(6)\]
The maximum stress is where \(\sin \left( {4\pi \alpha } \right) = 1\). The expression is therefore often written as
\[\frac{{{\tau _P}}}{G} = \frac{{2\pi }}{{G{b^2}}}\Delta {U_P}\;\;\;\;\;\;\;(7)\]
*A note on this graph: it is important to remember that even though we are multiplying by G**b**2, d/b also affects the dislocation width - increasing d/b increases the dislocation width, as seen on the "determining w" graph. So for this graph, if we impose a w/b, then we can see the effects of changing the magnitude of G**b**2 by adjusting the parameters G and b/d. Likewise, if we impose a b/d, we can see the effects of altering w/b through changing something else, such as the Poisson ratio.*
The graph shows the stress calculated using the maximum of the force plot (expression 4) and also the differentiated expression for ΔU (equation 7). They are very similar, which is expected, as the short numerical check below confirms.
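As a quick sanity check on equations (5)-(7), the following sketch (arbitrary units, with ΔUP and b set to 1) differentiates ΔUT(α) numerically and confirms that the maximum of τ(α) equals 2πΔUP/b2:

```python
import math

dU_P = 1.0   # Peierls energy per unit length (arbitrary units, assumed)
b = 1.0      # Burgers vector magnitude (arbitrary units)

def dU_T(alpha):
    """Equation (5): misfit energy change as the dislocation moves."""
    return 0.5 * dU_P * (1.0 - math.cos(4.0 * math.pi * alpha))

# Numerical derivative tau(alpha) = (1/b^2) d(dU_T)/d(alpha), equation (4)
h = 1e-6
tau_max = max((dU_T(a + h) - dU_T(a - h)) / (2 * h) / b**2
              for a in [i / 1000 for i in range(251)])   # scan 0 <= alpha <= 0.25

print(f"numerical max tau = {tau_max:.5f}")
print(f"2*pi*dU_P/b^2     = {2 * math.pi * dU_P / b**2:.5f}")
```

The stress peaks at α = 0.125, halfway up the energy hill, where the gradient of the misfit energy is steepest.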
Therefore, using expression (2) from the previous page,
\[\frac{{{\tau _P}}}{G} \propto 2\pi \exp \left( { - \frac{w}{b}} \right)\;\;\;\;\;\;\;(8)\]
The analytical solution provided by the continuum elasticity model gives the constants as:
\[{\tau _P} = 3G\exp \left( { - 2\pi \frac{w}{b}} \right)\;\;\;\;\;\;\;(9)\]
which is sometimes simplified to \( \frac{G}{{180}}\) if we assume that w = b (which is not in fact true, as we have seen!). This solution gives us a theoretical shear stress – i.e. the stress required for uniform slip – but, as can be seen from the above graph, it is several orders of magnitude higher than the actual Peierls stress.
The analytical solution is from Peierls (1940) and Nabarro (1947). In essence, they assume that the in-plane strain is that of the surface of a semi-infinite elastic continuum, and use that together with the sum of misalignment energies to find the width at the equilibrium point. The elastic continuum (i.e. the in-plane energy) is assumed to be fixed, as is the width; therefore the changes in energy are due only to the misalignment summation. This gives a wrong answer, and indeed the maximum and minimum positions swap (Utot and Umisalignment are out of phase). Since both the in-plane strain energy and the width change as the dislocation is displaced, these should be included. Compared with real values, the atomistic model gives a small underestimate of the Peierls stress, but the analytical solution is a vast overestimate, which renders it not very useful.
Lattice resistance
The higher the Peierls stress, the higher the lattice resistance. The model showed an exponential dependence of the Peierls stress on w/b (and hence on d/b, since we found that was the key factor affecting dislocation width). This exponential dependence is very important, and is seen experimentally in real materials.
![d over b versus tau graph](images/d_over_b_versus_tau.png)
τP can be predicted for both metallic and covalent materials using the atomistic model; there is no obvious effect of bonding type. d/b is the important factor. We often think of covalent materials as the "strongest", but this graph shows otherwise.
Experimental determination of the Peierls stress is done by considering the yield stress at zero kelvin. The materials tend to follow the trend for giant covalent crystals (diamond, TiC), but it is more complicated when coulombic forces come into play, for example in ionic crystals. It is important to consider which slip system we are using. For example, the halides (NaCl, LiF) and MgO have two slip systems – {100} <110> (for which there is good agreement) and {110} <110> (for which the model predicts too high a Peierls stress, by a factor of ≈102).
Uses and limitations of the atomistic model =
The atomistic model improves substantially upon the continuum elasticity model in providing an accurate estimate of the Peierls stress. This is because it takes into account the change in the in-plane strain energy as the dislocation moves, as well as the misalignment energy. However, the model assumes that lattice resistance dominates plastic flow, and so cannot be used to predict τY. In many materials (especially metals such as Al, Cu and Ni), the yield stress is substantially higher than τP. This is due to interactions with other obstacles and dislocation-dislocation interactions, which this model does not consider. It is important to remember that we are considering only independent dislocations in this model; the calculations would become much more complicated if we considered interacting dislocations.
The model also does not take into account the anisotropy of crystals: the critical shear stress would depend on some combination of elastic constants, different for each plane. It would also be influenced by the anharmonic forces between atoms, which are neglected here, since the displacements near the dislocation line are large.
One application is in the toughening of non-metallic materials. Increasing toughness requires that the stress required for a dislocation to move be substantially reduced. In this case we wish to decrease the energy associated with the misalignments across the slip plane, which acts to reduce the dislocation width, w. These energies scale with the shear modulus.
Summary =
There are two types of energy associated with a dislocation:
* In-plane energy – decreases as the dislocation width increases, so acts to spread the misfit strain over a larger region
* Misalignment energy – increases as the dislocation width increases, so acts to localise the misfit strains
+ The dislocation width will be the value for which the sum of the two types of energy is a minimum
+ w/b is strongly dependent on d/b, where b is the atom spacing parallel to the slip plane and d the spacing normal to it
+ Changes in misfit energy are the primary obstacle to dislocation motion
+ Using the atomistic model with a moving origin allows us to estimate the energy as the dislocation moves; hence we can determine the Peierls energy and the Peierls stress
+ The Peierls stress increases exponentially as the dislocation width w/b decreases
Questions =
### Quick questions
*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*
1. What is the main cause of lattice resistance?
| | | |
| - | - | - |
| | a | Change in the dislocation width as the dislocation moves |
| | b | Changes in misfit energy as the dislocation moves |
| | c | Change in the Peierls stress as the dislocation moves |
| | d | Dislocations interacting and causing work hardening |
2. What is the main problem with the Peierls continuum elasticity model?
| | | |
| - | - | - |
| | a | It only includes the nearest-neighbour misalignment energy and does not include the next-nearest-neighbour term. |
| | b | It assumes the in-plane energy is negligible and can be ignored |
| | c | It assumes the total energy remains constant as the dislocation moves |
| | d | It assumes that in-plane energy is constant and does not vary as the dislocation moves |
3. How do the misalignment and in-plane energies change with w/b?
| | | |
| - | - | - |
| | a | As w/b increases, both misalignment and in-plane energies increase |
| | b | As w/b increases, both misalignment and in-plane energies decrease |
| | c | As w/b increases, misalignment energy increases; in-plane energy decreases |
| | d | As w/b increases, misalignment energy decreases; in-plane energy increases |
4. What is the relationship between dislocation width and Peierls stress?
| | | |
| - | - | - |
| | a | Peierls stress increases linearly as the dislocation width increases. |
| | b | Peierls stress decreases exponentially as the dislocation width increases. |
| | c | Peierls stress increases exponentially as the dislocation width increases. |
| | d | Peierls stress decreases linearly as the dislocation width increases. |
5. What is the Peierls stress?
| | | |
| - | - | - |
| | a | The minimum stress for which fracture occurs |
| | b | The tensile stress at which slip starts to occur |
| | c | The minimum shear stress required to move a dislocation across the slip plane |
| | d | The minimum stress at which plastic deformation begins |
### Deeper questions
*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*
6. For what value(s) of α (the fraction of the unit cell across which the dislocation has moved) is the misfit energy a maximum?
| | | | |
| - | - | - | - |
| Yes | No | a | α = 0 |
| Yes | No | b | α = 0.25 |
| Yes | No | c | α = 0.5 |
| Yes | No | d | α = 0.75 |
Going further =
### Books
* Derek Hull and D.J. Bacon, *Introduction to Dislocations* (Volume 3 of Materials Science and Technology, 5th Edition), Elsevier, 2011, ISBN: 008096673X, 9780080966731
* H.J. Frost and M.F. Ashby, *Deformation-Mechanism Maps: The Plasticity and Creep of Metals and Ceramics*, First Edition, Pergamon Press, 1982, ISBN: 0080293379, 9780080293379
* Ulrich Messerschmidt, *Dislocation Dynamics During Plastic Deformation*, Springer, Berlin, Heidelberg, 2010, ISBN: 978-3-642-03176-2, 978-3-642-03177-9
### Paper
* Howie, P.R., Thompson, R.P., Korte-Kerzel, S. & Clegg, W.J. (2017), Softening non-metallic crystals by inhomogeneous elasticity. *Scientific Reports*, 7(1), 11602.
Aims
On completion of this TLP you should:
* understand the nature of dislocations both conceptually and in a real crystal structure
* appreciate the motion of a dislocation under an applied stress
* be aware of the methods that can be used to reveal dislocations in a crystal structure
Before you start
Most of this package assumes no prior knowledge. However, some of the more detailed explanations use the following:
* Miller index notation for specification of crystal planes and directions
* Bravais lattice types
It should be possible to make use of the package even without knowledge of these concepts.
Introduction
The concept of the dislocation was *invented* independently by Orowan, Taylor and Polanyi in 1934 as a way of explaining two key observations about the plastic deformation of crystalline material:
* The stress required to plastically deform a crystal is much less than the stress one calculates from considering a defect-free crystal structure
* Materials *work-harden*: when a material has been plastically deformed it subsequently requires a greater stress to deform further
Not until 1947 was the existence of dislocations experimentally verified. It took another ten years before electron microscopy techniques were advanced enough to show dislocations moving through a material.
The first section of this package deals with dislocations in the simplest way - in two dimensions. Much of the early work on dislocations was done using simple models such as bubble rafts. These can be tremendously instructive, and we use video clips and still pictures to demonstrate how plastic deformation occurs via dislocation motion.
A bubble raft.
In real crystals, dislocations are three-dimensional. The structure of a dislocation in 3D can be more difficult to visualise, and we look briefly at these structures. In the final part of this package, we consider some of the observations that were first used to verify the presence of dislocations in real crystals.
Dislocations in 2D
A 'raft' of equally sized bubbles floating on the surface of a liquid is a good large-scale model of a single plane of atoms in a crystal structure. The forces between the bubbles mimic the forces between atoms in a crystal. The bubbles pack to form a close-packed plane. If the raft is made carefully, it is possible to see a variety of structural features in the raft that also occur in real crystal structures, such as *grain boundaries*, *vacancies*, *dislocations* and *solute 'atoms'*.
A grain boundary in a 2D lattice is the interface between two regions of crystalline order. Each region or 'grain' has a different orientation with respect to some arbitrary axis perpendicular to the plane of the lattice.
![Diagram showing grain boundaries](images/grainboundaries.gif)
Grain boundaries
A *vacancy* is a point defect that arises when an atom is 'missing' from the ideal crystal structure.
![Diagram showing a vacancy](images/vacancy.gif)
A vacancy
A *solute atom* in a crystal structure is an atomic species that is different from the majority of atoms that form the structure. Solute atoms of similar size to those in the host lattice may substitute for host atoms - these are known as *substitutional solutes*. Solute atoms that are much smaller than the host atoms may sit within normally empty regions (*interstices*) in the host lattice, where they are called *interstitial solutes*.
![Diagram showing substitutional solute](images/substitutional-solute.gif)
![Diagram showing interstitial solute](images/interstitial-solute.gif)
Substitutional and interstitial solutes. Note that some distortion of the host lattice occurs around the solutes.
A dislocation in a 2D close-packed plane can be described as an extra 'half-row' of atoms in the structure. Dislocations can be characterised by the *Burgers vector*, which gives information about the *orientation* and *magnitude* of the dislocation.
![Diagram showing dislocations](images/dislocation.gif)
A dislocation
Bubble raft =
A bubble raft can be made by bubbling air through a soap solution, using a small air pump connected to a hollow needle. The size of the bubbles can be controlled by varying the flow of air through the needle, and by varying the depth below the surface of the liquid to which the needle is submerged. Two bars, one at each end, allow forces to be applied to the bubble raft.
Creating the bubble raft.
Examine the following still photograph of the bubble raft. The bubbles have been arranged approximately into a single crystal using gentle movement of the bars. The raft shows several defects that are analogous to crystalline defects. Try to identify *vacancies*, *dislocations*, *substitutional solutes* and *interstitial solutes*.
Dislocation motion
Watch the video clips of the bubble raft undergoing compressive, tensile and shear deformation. It may help you to watch each clip several times.
Video of bubble raft undergoing compressive and tensile deformation
Video of bubble raft undergoing shear deformation
At small strains, the arrangement of the bubbles does not change. This is elastic deformation of the raft. The bubbles change shape and move slightly apart in an effort to maintain the lowest-energy close-packed configuration. At larger strains, plastic deformation occurs. The bubble raft rearranges by *dislocation motion*. Notice how dislocation motion occurs along three directions in the raft. These are the close-packed directions, along which the distance between bubble centres is smallest.
![Photograph of bubble raft annotated to show close-packed directions](images/close-packed-plane.jpg)
Close-packed directions in a bubble raft
Under the different loading conditions, dislocations tend to move mainly along different sets of directions. In each case, the direction along which dislocations generally move is that with the highest resolved shear stress. Dislocations may nucleate near a different type of crystalline defect, such as a grain boundary, solute atom or vacancy.
Dislocation glide =
Dislocation motion along a crystallographic direction is called *glide* or *slip*. In the bubble raft experiment, dislocations glide when the raft is deformed. There must be a *local shear stress* in an appropriate direction on the dislocation for glide to occur. Dislocation glide allows plastic deformation to occur at a much lower stress than would be required to move a whole plane of atoms past another. These animations compare how plastic shear deformation occurs in a 2D primitive square lattice with and without dislocation glide.
Animation of slip by dislocation glide
Animation of slip by movement of whole lattice planes
The stress required to cause slip by moving entire planes past one another, and the stress required to cause slip by dislocation motion, can both be estimated. The estimate shows that the stress required for slip is much lower when the mechanism of slip is dislocation motion, and from this we can conclude that slip *does occur by dislocation motion*.
Dislocations in 3D
In three dimensions, the nature of a dislocation as a line defect becomes apparent. The *dislocation line* runs along the *core* of the dislocation, where the distortion with respect to the perfect lattice is greatest. There are two types of three-dimensional dislocation. An *edge* dislocation has its Burgers vector perpendicular to the dislocation line. Edge dislocations are easiest to visualise as an extra half-plane of atoms. A *screw* dislocation is more complex - the Burgers vector is parallel to the dislocation line. *Mixed* dislocations also exist, where the Burgers vector is at some acute angle to the dislocation line. In a 2D model such as the bubble raft, *only edge dislocations can exist*.
![Diagram of an edge dislocation showing line and Burgers vectors](images/edge-dislocation.gif)
![Diagram of a screw dislocation showing line and Burgers vectors](images/screw-dislocation.gif)
Edge and screw dislocations with line and Burgers vectors shown.
Virtual reality model of an edge dislocation
Virtual reality model of a screw dislocation
When a dislocation moves under an applied shear stress:
* individual *atoms* move in directions *parallel to the Burgers vector*;
* the *dislocation* moves in a direction *perpendicular to the dislocation line*.
An edge dislocation therefore moves in the direction of the Burgers vector, whereas a screw dislocation moves in a direction perpendicular to the Burgers vector. The screw dislocation 'unzips' the lattice as it moves through it, creating a 'screw' or helical arrangement of atoms around the core.
The ease of dislocation glide is partly determined by the degree of distortion (with respect to the perfect lattice) around the dislocation core. When the distortion is spread over a large area, the dislocation is easy to move. Such dislocations are known as *wide* dislocations, and exist in ductile metals.
Observing dislocations
A number of ways of 'seeing' dislocations in real materials have been developed since the 1950s. Only in the last few years have electron microscopy techniques advanced sufficiently to allow the atomic structure around a dislocation to be resolved.
### Optical microscopy - Etch pits in sodium chloride
Sodium chloride can be chemically etched to reveal some crystallographic features. Where a dislocation line intersects with a crystal surface, the core of the dislocation etches more rapidly than the surrounding dislocation-free crystal. This results in a small *etch pit*, large enough to be visible under low magnification in the optical microscope. The dislocations themselves are on the atomic scale - orders of magnitude too small to be visible with optical microscopy.
Sodium chloride single crystals can be cleaved using a razor blade tapped with a hammer. This creates a 'fresh' surface with no environmental damage - the surface of the crystal is otherwise attacked rapidly by moisture in the air. Sodium chloride cleaves along {100} planes.
Under the optical microscope, *cleavage steps* can be seen.
Micrograph of sodium chloride showing cleavage steps.
Dropping a few particles of silicon carbide grit onto the surface from a height of about 10 cm causes localised plastic deformation of the surface. This causes dislocations in the crystal to move. The surface can then be etched with iron (III) chloride in glacial acetic acid, and the etchant washed away with acetone. Under the optical microscope at a magnification of around 100x, 'rosettes' of etch pits may be observed, with their centres at the sites of impact of the silicon carbide. Elsewhere, more randomly distributed etch pits may be present due to pre-existing dislocations and damage during cleaving.
Micrograph of sodium chloride showing a rosette.
The etch pits around the deformed region are aligned along particular directions, shown schematically below:
![Schematic showing alignment of etch pits around deformed region](images/rosette-schematic.jpg)
Schematic showing alignment of etch pits around the deformed region.
The orientation of the rosettes with respect to the surfaces of the crystal (which will be {100}) can be interpreted in terms of the *slip systems* in sodium chloride - that is, the *direction* of the Burgers vector and the *slip plane* on which the dislocation moves. The key point is that the *dislocations only move on specific crystallographic planes in specific crystallographic directions*.
### Transmission electron microscopy
Dislocations can be observed in the transmission electron microscope (TEM). Due to the lattice distortion around the core of the dislocation, some Bragg diffraction of the electron beam occurs in a localised region around the core. Intensity is therefore directed away from the 'straight through' beam, so dislocations appear as dark lines in bright-field TEM images. Crystallographic information about the dislocation, such as the direction of the Burgers vector, can be determined from these TEM images. Being able to see dislocations as they move through a structure gives materials scientists a fascinating insight into the mechanisms of plastic deformation.
Recently, high-resolution TEM has allowed microscopists to image the crystal planes and atomic positions within materials directly. The method can be exceedingly complex, making use of the phase difference between several diffracted beams caused by the atomic structure. In the example micrograph, the atomic positions in an edge dislocation in TiAl can be seen.
HRTEM image of a dislocation in TiAl. It is a b = ½ [110] dislocation, taken with the beam down the [101] direction. (Source: Beverly Inkson, PhD Thesis, University of Cambridge, 1994)
Scanning tunnelling microscopy (STM) is a high-resolution surface imaging technique. It allows the atomic surface structure to be deduced, revealing the disruption of the lattice at the surface where dislocation lines intersect with it.
Summary =
In this package we have seen that a dislocation is a defect found in crystals. Dislocations are *line defects*, extending through a crystal for some distance along a *dislocation line*. The *Burgers vector* specifies the magnitude and direction of the atomic movements that occur as the dislocation moves through the lattice.
The angle between the line vector and Burgers vector characterises the nature of a dislocation - when the dislocation line and Burgers vector are perpendicular, the dislocation is known as an *edge* dislocation. When they are parallel, the dislocation is a *screw* dislocation. Between these two ideal angles, the dislocation is *mixed*.

A slice through an edge dislocation perpendicular to the dislocation line reveals that the dislocation is like an extra half-plane of atoms inserted between full planes, which are distorted to accommodate the dislocation.

![Diagram of an edge dislocation](images/dislocation.gif)

A 2D schematic representation of an edge dislocation in a close-packed plane.

The bubble raft experiment shows how dislocations and other defects occur in a close-packed plane. Application of stress to the raft shows how dislocations move under an applied stress, and it can be shown that the stress required to move a dislocation is less than that required to create a similar motion *via* movement of whole planes of atoms simultaneously. Dislocations explain the observation of plastic deformation at lower stress than would be required in a perfect lattice, and the phenomenon of work hardening.

Dislocations can be observed by a number of methods. Only high-resolution TEM lattice images or STM surface images can show dislocations directly, but etching methods and optical microscopy can be used to elucidate the presence of dislocations, for example on sodium chloride crystal surfaces.

Questions
=

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. Which of the following statements is *false*?

   a. A bubble raft can demonstrate vacancies and solute atom defects.
   b. Plastic deformation in a bubble raft occurs by dislocation motion.
   c. Screw dislocations can be seen in the bubble raft.
   d. Bubbles in a well-formed bubble raft are close-packed.

2. Which of the following statements best describes the nature of dislocations in an *amorphous material*?

   a. Amorphous materials cannot contain dislocations.
   b. The dislocation density in an amorphous material is normally less than the dislocation density in a crystalline material with the same composition.
   c. The dislocation density in an amorphous material is normally greater than the dislocation density in a crystalline material with the same composition.
   d. A dislocation in an amorphous material must be of the *edge* type.

3. What are the conventional units of dislocation density?

   a. m⁻²
   b. m⁻³
   c. kg m⁻³
   d. kg m⁻²

4. How would you make a stack of ham sandwiches look like a screw dislocation?

### Deeper questions

*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

5. Do the explanations of these experimental observations involve the concept of dislocations? *(Answer yes or no for each)*

   a. The yield stress of a material is lower than that calculated from assuming a perfect lattice.
   b. In some cases, the stress required to continue plastic deformation increases as deformation proceeds.
   c. The yield stress of most metals decreases as the temperature increases.
   d. Ceramics tend to be brittle, whereas metals tend to be ductile.
6. The energy per unit length, *U*, associated with an edge dislocation is given by *U* ≈ ½*Gb*², where *b* is the magnitude of the Burgers vector **b** and *G* is the shear modulus. Estimate the energy per unit length of a dislocation in silver.

   Data for silver: Crystal system is cubic F, *a* = 0.409 nm. **b** lies parallel to <110> directions. Shear modulus *G* = 28.8 GPa.

7. Determine whether the following dislocations in sodium chloride are *edge*, *screw* or *mixed*. Identify the slip plane in which the dislocation lies. (Sodium chloride is cubic F and slips on {110} slip systems.)

   | | Burgers vector **b** parallel to: | Line vector **l** parallel to: |
   | - | - | - |
   | a | [110] | [110] |
   | b | [001] | [110] |
   | c | [100] | [111] |

8. Are the following statements true? *(answer yes or no for each)*

   a. Using high-resolution microscopy techniques, it is possible to probe the structure of a dislocation.
   b. The number of dislocations in a sample of material affects the ease of diffusion of impurities through the sample.
   c. The diffusion of impurity atoms in a sample of a material affects the ease of motion of dislocations under an applied stress.

### Open-ended questions

*The following questions are not provided with answers, but intended to provide food for thought and points for further discussion with other students and teachers.*

9. Is dislocation glide always the mechanism of plastic flow? How might other mechanisms operate?

10. Why might the presence of dislocations in materials used for electronic components (such as integrated circuits) be a problem? How might the problem be reduced or solved?

11. Plastically deforming a material requires energy input. In what ways is this energy dissipated? Is any of it stored in the material?

Going further
=

### Books

Many general materials science texts cover introductory material on dislocations. It is worth studying a few books because different authors present the material in different ways, and you may be able to visualise the concepts more easily when presented in a particular way. Also consider:

* Hull and Bacon, Introduction to Dislocations (4th edition), Pergamon, 2001. A comprehensive text covering material at introductory through to advanced level.
* Cahn, The Coming of Materials Science, Pergamon, 2001. A 'history and portrait' of the subject, including the story of the emergence of the dislocation concept.

For the interested student, a library search for books by Cottrell, Read, Nabarro or Friedel will produce some classic texts on dislocations.

### CD-ROM and websites

The MATTER Project's 'Materials Science on CD-ROM' includes modules on:

* Introduction to Crystallography (including Miller Indices etc.)
* Introduction to Point Defects
* Dislocations

See the MATTER Project website for details of availability.

An online introduction to scanning tunnelling microscopy provides an overview of the instrument and its operation, and a gallery of images associated with dislocations.
Aims

On completion of this TLP package, you should:

* understand the process of electromigration
* be aware of how damage is caused by electromigration and the ways in which such damage is minimized
* know the reasons for changing from Al-based to Cu-based metallization.

Before you start

* It would help to look through some related TLPs first, but this TLP is mostly self-explanatory.

Introduction

The use of microelectronic devices within our daily lives has increased vastly as a result of recent scientific and technological advances. Electromigration is a principal wear-out mechanism of integrated circuits (ICs), limiting their reliability. Reliability is important for success in the microelectronics industry, as a product is expected to work for an extended period of time without failure. This makes electromigration an area of intense research, as more and more is demanded of microprocessors - they must be faster, with smaller components, and cheaper.

![](../images/divider400.jpg)

#### History

An IC contains various interconnected semiconductor components, such as transistors, resistors, capacitors and diodes. The first integrated chip contained tens of transistors within a chip area of 1 cm², and can now be called "Small-Scale Integration" (SSI).

With time, there has been a reduction of the dimensions of personal computing systems, from the size of a desktop, to a notebook, to a palmtop, to a credit card, to a watch, eventually to the size of a finger ring! Concurrently, there has been a change in the volume and complexity of wireless communication systems.

![](figures/history_sml.png)

Reprinted from 'Reliability and Failure of Electronic Materials and Devices' by Milton Ohring, Copyright 1998, with permission from Elsevier.

Currently, a chip area of 1 cm² or smaller contains hundreds of thousands to several million transistors - "Very Large-Scale Integration" (VLSI) - making it possible to fabricate a Central Processing Unit (CPU) on a single integrated circuit. The industry now uses the terminology of "Ultra-Large-Scale Integration" (ULSI) for chips containing more than a million transistors, and to emphasize chip complexity. This continuous stream of achieving higher levels of integration brings about an exponential growth in the power of computing and communications technology available to consumers and businesses worldwide. This is described by Moore's Law.

**Moore's Law states that:**

***The number of transistors on an integrated circuit doubles about every two years.***

Today, adding transistors in pace with Moore's Law continues to drive increased functionality and performance, and decreased cost, of computing and communications technology. The price of an individual transistor has decreased drastically from $45 (in the 1950s) to less than a hundred-thousandth of a cent (as of 2005)!

The dramatic miniaturization has involved not only the semiconductor components of an IC but also the conducting lines linking them (the *interconnects* - part of the *metallization* on a device). Changes relevant for electromigration-induced failure of interconnects include:

* line widths shrinking from 10 μm to 0.19 μm, which will become even finer in the future.
* total length of interconnect lines on a single IC increasing from several cm to several km (reaching lengths of about 5 km).
* current densities increasing from 10⁸ A m⁻² to 10¹⁰ A m⁻².
* number of metal levels increasing from 1 to 7 - this is because the number of components has increased so vastly that a single level on top of the silicon substrate is insufficient to interconnect all the semiconductor components in the substrate, thus requiring multiple levels of metallization lines.

![](figures/line_width_sml.png)

Line widths decreasing from 1 μm to 0.5 μm (for research purposes line widths as small as 0.07 μm have been used). Reprinted from 'Reliability and Failure of Electronic Materials and Devices' by Milton Ohring, Copyright 1998, with permission from Elsevier.

These changes all increase the likelihood of electromigration-induced failure of the metallization, and they drive further research to reduce the effect of electromigration. The latest development is the move away from aluminium-based to copper-based metallization lines.

The theory of electromigration

Electromigration is the transport of material in a conductor under the influence of an applied electric field. All conductors are susceptible to electromigration, so it is important to consider the effects that the electrical current resulting from the applied field may have on the conductor. The net force exerted on a single metal ion in a conductor has two opposing contributions: a *direct force* and a *wind force*.

![](../images/divider400.jpg)

#### Direct Force

Application of an electric field results in an electrostatic pull being exerted on the metal ion core. The direction and magnitude of the electrostatic pull depend on the charge on the ion core, modified by screening effects. Positively charged ion cores (cations) are pulled towards the cathode, whilst negatively charged ion cores (anions) are pulled towards the anode. The direct force, *F*d, is given by:

*F*d = *aZeE* = *aZejρ*

where *a* = factor accounting for screening (*a* << 1); *Z* = actual valence of the atom; *e* = electron charge (1.6 × 10⁻¹⁹ C); *E* = electric field (V m⁻¹); *j* = current density (A m⁻²); *ρ* = resistivity (Ω m).

When electromigration in conductors was discovered, it was thought that the migration effects would be analogous to those in liquid electrolysis. Consider, for example, the electrolysis of the molten salt NaCl. The Na+ ions move towards the cathode, whilst the Cl– ions move towards the anode (as shown in the diagram below), as the electric field is applied.

![](figures/electrolysis_sml.png)

It was thought that the metal ions within a wire would always move towards the cathode, as in electrolysis. While this is found in some cases, in the metallization of interest for ICs (which is based on good metal conductors) the migration was observed to be in the opposite direction - towards the anode. This startling effect results from the wind force.

![](../images/divider400.jpg)

#### Wind Force

Electrons move along the metallization line, carrying the current. These electrons tend to scatter. Electron scattering takes place at imperfections within the lattice: vacancies, impurities, grain boundaries, dislocations and even phonon vibrations of the metal ions from their ideal positions!
The scattering of electrons gives us electrical resistance, but it also results in a force exerted on the metal ion core. An electron changes direction as a result of a scattering event. This change in direction is accompanied by an acceleration, which results in a force. The electrons are also accelerated within the electric field. When the overall electron drift velocity is established, the force on the ions due to electron scattering is in the direction of the electron flow - this is known as the wind force, *F*w, which can be described as:

*F*w = *−en*e*λσ*i*E*

where *n*e = density of electrons; *σ*i = cross-section for collision; *λ* = mean free path.

![](figures/tree.png)

The term *electron wind* comes from an analogy with, for example, a tree being blown in the wind. The wind is analogous to the electron current, the leaves on the tree to the metal ions in a conductor.

![](../images/divider400.jpg)

#### Net Force

In good conductors, such as the ones used in IC metallization, the electron wind force is the dominant force felt by the ion cores, resulting in atomic migration towards the anode. The net force on the ions can be represented as:

*F*net = *F*wind + *F*direct = (*Z*w + *Z*d)*ejρ* = *Z*\**ejρ*

where *Z*wind, *Z*direct and *Z*\* respectively refer to the effective valences for the wind force, the direct force and the net force.

![](../images/divider400.jpg)

#### Diffusion

The migration of the metal ion cores occurs by diffusive jumps. In the cases of interest in metallization, this is self-diffusion or substitutional diffusion by the vacancy mechanism. The rate of diffusion is described by the diffusion coefficient (or diffusivity) with units of m² s⁻¹. The diffusivity, *D*, has an Arrhenius dependence on temperature according to:

*D* = *D*0 exp(*−Q*/*RT*)

where *D*0 = constant (m² s⁻¹); *Q* = activation energy for diffusion (J mol⁻¹); *R* = gas constant (8.314 J mol⁻¹ K⁻¹); *T* = absolute temperature (K). The diffusion coefficient and the activation energy are dependent on the nature of the material.

![](figures/lattice.png)

We depict above a 2-D section through a simple cubic metallic lattice showing a single vacant site. The vacancy can jump into either position 1, 2, 3 or 4 (i.e. it can exchange position with the atom in position 1, 2, 3 or 4). Without any external influence, the probability of the vacancy jumping into each site is equal, as its energy on every site is the same and the activation energy for a jump from site to site is always the same.

When an electric field is applied, the activation energies for a jump into sites 1 and 2 remain identical, whereas the activation energy for a jump into site 4 is smaller than that into site 3. This results in a biasing of vacancy diffusion jumps towards the cathode. This net flux of vacancies towards the cathode corresponds to a net flux of atoms (or ions) towards the anode. This flux of metal ions can be considered to be due to the effective charge, *Z*\*, on the ion and the associated net force as described above:

*J* = *CDF*net/*RT* = *CDZ*\**eρj*/*RT*

where *J* = atomic flux (atoms m⁻² s⁻¹); *C* = atomic concentration (atoms m⁻³); *D* = diffusivity (m² s⁻¹).

Note that this overall net flux is exceedingly minute. There is only a very slight biasing of atomic movement: in 1000 atoms, 499 might diffuse towards the cathode whilst 501 diffuse towards the anode, resulting in a net of 2 atoms migrating towards the anode. Electromigration is therefore a slow process, and damage takes time to develop.
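To get a feel for the magnitudes involved, the sketch below evaluates the flux equation with illustrative numbers. Everything marked as assumed (temperature, *Z*\*, the Arrhenius parameters, the current density) is a rough, hypothetical value chosen for illustration, not data from this TLP; the per-atom form of the equation is used, so Boltzmann's constant replaces the molar *R*.

```python
# Order-of-magnitude sketch of the electromigration flux, J = C D F_net / (kT),
# written per atom (Boltzmann's k in place of the molar R above).
# All values marked "assumed" are illustrative guesses.
import math

e = 1.602e-19      # electron charge (C)
k = 1.381e-23      # Boltzmann constant (J K^-1)
R = 8.314          # gas constant (J mol^-1 K^-1)
T = 400.0          # assumed line temperature (K)

# Arrhenius diffusivity D = D0 exp(-Q/RT): assumed grain-boundary values for Al
D0 = 1.0e-5        # assumed pre-exponential (m^2 s^-1)
Q = 82.0e3         # assumed activation energy (J mol^-1), roughly 0.85 eV/atom
D = D0 * math.exp(-Q / (R * T))

# Net (wind-dominated) force per ion, |F_net| = Z* e rho j
Z_star = 5.0       # assumed magnitude of the effective valence
rho = 2.65e-8      # resistivity of Al (ohm m)
j = 1.0e10         # assumed current density (A m^-2)
F = Z_star * e * rho * j

C = 6.0e28         # atomic concentration of Al (atoms m^-3)
J = C * D * F / (k * T)

print(f"D ~ {D:.1e} m^2/s, F ~ {F:.1e} N, J ~ {J:.1e} atoms m^-2 s^-1")
```

With these assumed values the net flux comes out at a few times 10¹⁷ atoms m⁻² s⁻¹; through a 0.2 μm × 0.2 μm line cross-section that is only of order 10⁴ atoms per second, which illustrates why electromigration damage accumulates slowly.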
Electromigration damage
=

Electromigration is the mass transport in a metallic conductor due to the momentum transfer between conducting electrons and diffusing metal atoms. Uniform electromigration within the metallization lines, if it could be maintained, would not be damaging: in steady-state, no damage should be observed other than at the beginning and end of the metallization line. This is because, along the metallization line, the number of atoms arriving in a given local volume is equal to the number of atoms leaving the volume, as shown in the diagram below.

![](figures/metallization_line.png)

Damage to the metallization lines is caused by *divergences* in atomic flux. When the amounts of matter leaving and entering a given volume are unequal, the associated accumulation or loss of material results in damage. When atomic flux into a region is greater than the flux leaving it, the matter accumulates in the form of a hillock or a whisker. If the flux leaving the region is greater than the flux entering, the depletion of matter ultimately leads to a void. These features are shown in the SEM micrographs below.

![](figures/hillock_small.png)

Source of images: (left) Microelectronic Materials by CRM Grovenor, IOP Publishing Ltd, Bristol (UK); (right)

Regions of void formation are usually associated with neighbouring regions of material accumulation, as atoms are transported from one region to the other. Stresses develop within the metallization line as a result of the mass transport.

![](figures/stresses.png)

Source of image: O. Kraft, J.E. Sanchez Jr., M. Bauer, E. Arzt: Quantitative analysis of electromigration damage in Al-based conductor lines. J. Mat. Res. 12 (1997) p. 2027-2037.

A stress gradient builds up within the metallic line and opposes the electromigration force. The formation of voids and hillocks partially relieves these stresses. A void forms to relieve tensile stresses, whilst hillock growth relieves compressive stresses. The growth of voids and hillocks can be viewed in the videos below:

Your browser does not support the video tag. Video showing hillock growth (wide metallization lines)

Your browser does not support the video tag. Video showing void growth and migration (wide metallization lines)

Your browser does not support the video tag. Video showing void growth (modern, narrow metallization lines)

Voids and hillocks are detrimental to the metallization lines. A growing hillock could come into contact with other metallization lines, resulting in a short circuit - an unintended pathway for the electricity to flow. As a void grows, the effective cross-sectional area of the metallization line decreases, which increases both the resistance and the local current density within the system. The void ultimately leads to an open circuit when no material bridges across it. These effects disrupt the intended functioning of the integrated circuit and result in failure. The nature of a microprocessor chip makes repair impossible, and the failed chip has to be replaced.

Flux divergence (I)
=

Divergences in atomic flux, *J*, within the metallization line result in damage. The origins of non-steady-state mass transport (i.e.
transport such that the concentration of atoms, *C*, changes locally) can be understood in terms of the equation of continuity; for example, in one dimension:

\[\frac{{\partial C}}{{\partial t}} = - {\left. {\frac{\partial }{{\partial x}}\left\{ {\frac{{CDZ\*e\rho j}}{{RT}}} \right\}} \right|\_{T = {\rm{constant}}}} - {\left. {\frac{\partial }{{\partial T}}\left\{ {\frac{{CDZ\*e\rho j}}{{RT}}} \right\}} \right|\_{x = {\rm{constant}}}}\frac{{\partial T}}{{\partial x}}\]

The first term on the right-hand side arises from local structural and property gradients at constant temperature; the second arises from temperature gradients acting on thermally dependent properties.

Therefore when:

*∂C*/*∂t* < 0: mass depletion occurs and voids form (positive flux divergence)

*∂C*/*∂t* > 0: mass accumulates in growths - hillocks and whiskers (negative flux divergence)

*∂C*/*∂t* = 0: there is no change in atomic concentration and no damage occurs.

All the metal parts through which current flows in integrated circuits are potentially susceptible to electromigration effects. Different aspects are important in interconnects, contacts and vias. As an added complication, the system observed is never static. Interactions within the metallization line result in the:

* movement of grain boundaries (grain growth)
* induction of heating
* recrystallization of the grain structure
* subtle evolution of microstructure and chemical composition.

The main contributions to flux divergence within metallization lines are variations in *microstructure*, *material* or *temperature*.

![](../images/divider400.jpg)

#### Variation in Microstructure

Grain boundaries within Al-based metallization lines act as fast diffusion paths compared to the bulk. The atomic environment at a boundary is less confining and contains fewer obstacles to diffusion. Accordingly, the activation energy for grain boundary diffusion is lower than that for bulk diffusion. As diffusion at grain boundaries far exceeds transport through the bulk of grains, the overall rate of atomic transport is greatly affected by the grain size, which determines the area of grain boundary in a given volume of sample.

The grain structure within metallization lines can vary from place to place: for example, the deposition technique affects the degree of variation of microstructure. The associated property variations also lead to electromigration damage. Differences in microstructure or properties can be very large or barely perceptible, involving variations in grain orientation, grain size, chemical composition, atomic diffusivity, effective valency and vacancy generation within the grain structure.

Electromigration-induced damage is profoundly affected by grain structure: for example, single-crystal aluminium stripes exhibit "infinite" life. This is because atomic diffusion and drift, for a metallization based on Al, are dominated by transport along grain boundaries rather than transport through the bulk of grains.

Some examples of microstructural variations are:

* **Triple points**

  A film with a uniform grain size has a network of grain boundaries meeting at triple points (strictly, triple *lines* in 3-D). In annealed thin films it is usual for the grain structure to be in a 2-D pattern, with each grain occupying the full thickness of the film.

  ![](figures/triple_point_sml.png)

  For a common metallization, such as Al-Cu, where the migrating atoms are solely confined to grain boundaries, it is possible to get divergences in mass transport at the triple points found in interconnect and contact lines.
  The atomic diffusivity in a given boundary can vary widely, depending on the structure of the boundary, which is related to the crystallographic misorientation of the grains. But even if we assume that all the grain boundaries are high-angle boundaries with similar diffusivities, it can easily be understood that flux divergences must arise at triple points. When the direction of migration is such that one grain boundary leads into and two boundaries lead away from a triple point, more material leaves than arrives (a positive flux divergence), and this can lead to the formation of a void. The converse is true when two boundaries lead into a triple point and one leaves, giving mass accumulation and possible hillock formation.

  ![](figures/triple_point_flux_sml.png)

  It might be assumed that a metallization line containing numerous triple points would be full of defects. This is not the case, as described by the concept of a *critical length* (the Blech length). Often, no damage is observed at triple points because they are closer together than this critical length for damage: for example, for a current density of 1 × 10⁹ A m⁻², a line length of several hundred micrometres will not be susceptible to electromigration. A longer metallization line would have a larger number of triple points and hence an increased likelihood of this kind of failure. However, other factors come into effect as metallization line length increases, resulting in yet further damage.

* **Differences in grain size**

  A fine-grained region contains more grain boundaries for atomic migration than a coarse-grained region. Accumulation of atoms therefore occurs when atomic migration is from a fine-grained region into a coarser-grained region. Conversely, voiding occurs when the migration is from a coarse to a finer grain size.

  ![](figures/variation_in_grain_size_sml.png)

  To avoid damage-inducing mass divergences, a bamboo-like grain structure (as shown in the SEM micrograph below) is desirable. The lack of continuous grain boundary paths for diffusion results in negligible mass transport along grain boundaries. The driving force for electromigration is predominantly perpendicular to the grain boundaries.

  ![](figures/bamboo_structure.png)

  SEM micrograph of a metallization line containing a bamboo structure - grain boundaries are perpendicular to electron flow. (*Reprinted with permission from: A.G. Domenicucci et al.: Effect of copper on the microstructure and electromigration lifetime of Ti-AlCu-Ti fine lines in the presence of tungsten diffusion barriers. J. Appl. Phys. 80, p. 4952. Copyright 1998. American Institute of Physics.*)

The animation below shows the bamboo structure and its resistance to EM damage.
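The dominance of grain-boundary transport can be made concrete with the Arrhenius expression quoted earlier. The sketch below compares bulk and grain-boundary diffusivities; the *D*0 and *Q* values are rough assumed figures for Al, chosen only to illustrate the scale of the difference.

```python
# Sketch: grain boundaries as fast diffusion paths, via D = D0 exp(-Q/RT).
# The D0 and Q values below are rough assumed figures, for illustration only.
import math

R, T = 8.314, 400.0        # gas constant (J mol^-1 K^-1); assumed line temperature (K)

def diffusivity(D0, Q):
    """Arrhenius diffusivity; D0 in m^2/s, Q in J/mol."""
    return D0 * math.exp(-Q / (R * T))

D_bulk = diffusivity(1.0e-5, 142e3)   # assumed bulk (lattice) self-diffusion in Al
D_gb   = diffusivity(1.0e-5, 82e3)    # assumed grain-boundary diffusion in Al

print(f"D_bulk ~ {D_bulk:.1e} m^2/s")
print(f"D_gb   ~ {D_gb:.1e} m^2/s  ({D_gb / D_bulk:.0e} times faster)")
```

Because the activation energy appears in an exponential, the modestly lower grain-boundary value makes boundary transport many orders of magnitude faster at service temperatures, which is why single-crystal and bamboo-structure lines perform so well.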
Flux divergence (II)

#### Variations in Material

Different materials are used within an integrated circuit: for example, the Si substrate, W vias and Al interconnects. Differences in diffusion rates between two materials result in atomic flux divergence at the interface between them. If the current flows from a material with a higher diffusivity to one with a lower diffusivity, the interface between the materials is a region of mass accumulation. Conversely, when current flows from a material with a lower diffusivity to another material with a higher diffusivity, void formation takes place at the interface. Electromigration-induced damage is most evident where the change in diffusivity is very large, e.g. at a Cu/barrier interface or an Al line/W via interface. Two main examples of this can be seen in *contacts* and *vias*.

* Contacts

  When current flows through the contacts, electromigration causes metal atoms to move away from or towards the semiconductor interface. Metal that is removed from the interface cannot be replenished, and metal that is brought to it cannot be carried away, so the interface is a region of mass divergence. At the contact window either accumulation or depletion of Al occurs, depending on the direction of current flow.

  ![](figures/contacts_sml.png)

* Vias

  The IC architecture requires the use of successive interconnect levels, and therefore of vias that enable current to flow between the layers. For Al-based metallization, W plug vias are commonly employed due to their high reliability; structures consisting of Al-Cu vias can be used, but are more difficult to make. As Al metallization tends to migrate whilst W exhibits negligible atomic transport (as shown in the values of self-diffusivity below), via-interconnect interfaces become sites of potentially large mass divergences. The bulk lattice diffusivity values of Al and W at 300 K (assumed service temperature of the IC) are\*:

  *D*Al, bulk = 3.2 × 10⁻²⁹ m² s⁻¹

  0 < *D*W, bulk < 5 × 10⁻⁸⁸ m² s⁻¹ (negligible)

  \**Values calculated from Smithells Metals Reference Book, 6th Ed., Mechanisms of Diffusion p. 13-11; E.A. Brandes (editor), Butterworths, London (UK), 1983.*

  ![](figures/void_formation_sml.png)

  Source of SEM image of void formation at a via: R. Rosenberg et al.: Copper metallization for high performance silicon technology, Annu. Rev. Mater. Sci. 20 p. 229, 2000.

  Degradation at vias is dependent not only on composition and grain structure, but also on the direction of current flow. The two modes of electromigration damage are shown above (in schematic form and in an SEM micrograph). Voids form where the electrons flow away from the via, while a hillock forms where electrons flow towards the via. In addition, thinned metal conductors and corners induce excessive Joule heating and current crowding, which lead to accelerated degradation.

![](../images/divider400.jpg)

#### Variations in Temperature

Temperature differences along the metallization line cause flux divergence because the diffusion coefficient is dependent on temperature. At higher temperatures, diffusion rates are increased. If there is a variation of temperature along the metal line, regions of accumulation and void growth will develop over time. Although the silicon substrate acts as a very good heat sink, the temperature along a line can vary because of heat generation in the underlying semiconductor components and because of local Joule heating in the metallization itself.

Avoiding electromigration problems

At the time of publication of this TLP, the International Technology Roadmap for Semiconductors gave the top three challenges facing the industry as:

* Problems with integration and material characterization arising from rapid introduction of new materials and processes, which are necessary to meet conductivity requirements and reduce dielectric permittivity.
* Engineering interconnect structures which can be manufactured and are compatible with new materials, because there is a lack of interconnect/packaging architecture design optimization tools to include: integration complexity, chemical mechanical polishing (CMP) damage, resist poisoning, and dielectric constant degradation.
* Achieving the necessary reliability with new materials, structures and processes; and the need for detection, testing, modelling and control of failure mechanisms.

Failure mechanisms, such as electromigration, create reliability problems. The need for reliability within ICs therefore drives research into avoiding electromigration failure. This is a huge challenge to the microelectronics industry as everything tends towards smaller and smaller components.

There are several factors affecting the lifespan of interconnects. These can be divided into two classifications: *material and processing*, and *external conditions*.

**Material and Processing**

* composition of the metal alloy
* crystallographic orientation of the grains within the metal
* dimensions and shape of the conductor
* procedures of layer deposition
* types of heat-treatment and annealing
* characteristics of passivation

**External Conditions**

* interface with other materials
* time dependency and type of current - direct or alternating current forms
* current density
* external heating effects

Electromigration failures take time to develop, and the early stages are difficult to detect. As electromigration damage is cumulative, it is best to prevent damage from occurring during the lifetime of the device. Careful design helps to prevent electromigration-induced damage. This includes: ensuring that current densities in all parts of the circuit are limited, so as to have sufficient current to run the device yet minimize any potential electromigration damage; choosing metallization compositions to limit electromigration degradation; and making good selections for passivating thin films placed over metal lines to prevent extrusions caused by electromigration. An example of this is the move towards copper-based metallization. The grain structure of metallization lines can also be optimized (see the bamboo structure discussed under flux divergence).

When designing a device, the median lifetime is often estimated using **Black's Law**:

\[{t\_{50}} = c{j^{ - n}}{e^{\frac{{{E\_e}}}{{kT}}}}\]

where *t*50 = median time to failure of metal lines subjected to electromigration (hrs); *c* = constant based on the metal line properties (units depend on exponent *n*); *j* = current density (A m⁻²); *n* = value between 1 and 7 (though commonly 2); *E*e = activation energy [within the range 0.5-0.7 eV for Al, i.e. (0.8-1.1) × 10⁻¹⁹ J]; *k* = Boltzmann constant (1.38 × 10⁻²³ J K⁻¹); and *T* = temperature (K).

Black's Law is an empirical relationship. It is found by performing experiments at higher current densities and temperatures, to speed up the time to failure. The data are then extrapolated down to the service conditions for the current density and temperature of the IC (a rough extrapolation is sketched below). Optimising the median lifetime is difficult, as exact and verified values are required for the current densities in order to make a reasonable estimation, and it is challenging to obtain the data required. It is also uncertain whether such an extrapolation of the data gives a suitable estimation. Often, therefore, the best that can be done is to limit the current density so as to give the desired lifetime.
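A minimal sketch of such an extrapolation follows. The test result, the service conditions and the choices of *n* and *E*e are all assumed, illustrative values; the constant *c* cancels when the two conditions are compared.

```python
# Sketch: extrapolating Black's Law, t50 = c j^-n exp(Ee/kT), from accelerated
# test conditions to service conditions. All numbers are assumed/illustrative.
import math

k = 8.617e-5                 # Boltzmann constant (eV K^-1)
n, Ee = 2.0, 0.6             # assumed exponent and activation energy (eV)

t50_test, j_test, T_test = 100.0, 2.0e10, 500.0   # assumed test: hours, A/m^2, K
j_serv, T_serv = 5.0e9, 350.0                     # assumed service conditions

# Taking the ratio of the two conditions eliminates the constant c
t50_serv = t50_test * (j_test / j_serv) ** n \
           * math.exp((Ee / k) * (1.0 / T_serv - 1.0 / T_test))

print(f"Estimated service t50 ~ {t50_serv:.1e} hours (~{t50_serv / 8766:.0f} years)")
```

Note how sensitive the answer is to *E*e and *n*: small errors in either are amplified enormously by the extrapolation, which is exactly the reservation expressed above.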
An alternative is to make use of other materials and their properties. Currently, refractory barrier layers, e.g. TiN, TiW or Cr2O3, are used to carry the current after an open circuit results from void formation, thus giving the device a longer lifetime before failure.

Alternatives to aluminium metallization
=

Technological pressures on the speed and reliability of integrated circuits have created a need for changes in the choice of materials used for metallization lines. In selecting a material for metallization, it is necessary to consider what its desirable properties would be. These include:

* low resistivity
* mechanical stability, good adherence, and low stress
* easy to form
* easy to etch for pattern generation
* stable throughout processing, including: high temperature sinter, dry or wet oxidation, gettering, phosphorus glass (or any other materials) passivation, and metallization
* should not contaminate devices, wafers or working apparatus
* stable in oxidizing ambients and able to form a stable oxide
* no reaction with other components
* good surface smoothness
* low contact resistance, minimal junction penetration and low electromigration for use in window contacts
* very low cost

During the growth of the microelectronics industry, metallization lines have moved from the use of *pure Al* to *Al-Cu alloy* and currently to *pure Cu*.

![](../images/divider400.jpg)

#### Using Pure Aluminium

Commercially, it might be assumed that the material having the lowest resistivity would be used for IC interconnections. This is because the lower the resistivity of the material, the smaller the resultant RC-delay, and the faster the signal transmission between devices - in order to be useful, resistivities must be below 50 μΩ cm.

| Metal | Electrical resistivity (at 20 °C) / μΩ cm |
| - | - |
| Al | 2.65 |
| Cu | 1.67 |
| Au | 2.35 |
| Ag | 1.59 |
| W | 5.65 |

Data obtained from Metals Handbook, 8th Edition, ASM, Metals Park, Ohio.

Yet the standard interconnection material previously used was Al, despite other materials such as Ag, Cu and Au having lower resistivity values (as shown in the table). This is because it was found that:

* Ag is too prone to attack by S and O, and could not maintain its low resistivity over the device lifetime.
* Cu is prone to oxidation, resulting in a vast increase in the resistivity of the material.
* Ag and Au are very difficult to deposit as very low resistance films.

Al has low resistivity and can be easily deposited. It can be dry etched, does not contaminate Si, forms a protective Al2O3 oxide layer which prevents further oxidation, has excellent adhesion to dielectrics, and is able to form ohmic contacts.

![](figures/Al_lattice.png)

These factors resulted in the universal use of Al-based metallization in ICs. Unfortunately, Al has a problem forming contacts with shallow junctions, has difficulty achieving a good mechanical and electrical connection as a vacuum deposited thin film, and is very prone to electromigration failure.
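To make the resistivity comparison concrete, the sketch below computes the resistance of a long, narrow line, R = ρL/A, for the metals tabulated above. The line geometry is a hypothetical example, not a specific process node.

```python
# Sketch: line resistance R = rho * L / A for a narrow interconnect, using the
# bulk resistivities tabulated above. The geometry is a hypothetical example.
rho = {"Al": 2.65e-8, "Cu": 1.67e-8, "Au": 2.35e-8, "Ag": 1.59e-8, "W": 5.65e-8}  # ohm m

L = 1.0e-3          # assumed line length: 1 mm
w = h = 0.2e-6      # assumed line width and thickness: 0.2 um
A = w * h           # cross-sectional area (m^2)

for metal, r in rho.items():
    print(f"{metal}: R = {r * L / A:6.0f} ohm per mm of line")
```

Over a millimetre of 0.2 μm line, the difference between Al and Cu is already a few hundred ohms, which feeds directly into the RC-delay; this, together with electromigration resistance, motivates the developments described next.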
![](../images/divider400.jpg)

#### Using Aluminium-Copper Alloys

Due to problems associated with electromigration in pure Al metallization, there was a real need to improve the metallization. In the mid-1960s, a mis-focused electron beam evaporated some Cu instead of the charge metal (Al). The resultant Al-Cu alloy films were found to be significantly more resistant to electromigration.

![](figures/AlCu.png)

This is shown in the table below: as the Cu content increases, the critical product (*jl*)c increases and the effective charge, Z\*, decreases.

| | Cu content (wt%) | Critical product (*jl*)c / A cm⁻¹ | Effective charge number, Z\* |
| - | - | - | - |
| Al standard | 0 | 244 | 18 |
| AlSiCu cold | 0.5 | 211 | 13 |
| AlSiCu hot | 0.5 | 298 | 13 |
| AlCu | 2 | 833 | 5 |

Data from: 'Quantitative analysis of electromigration damage in Al-based conductor lines' by O. Kraft et al., J. Mater. Res., 12 (1997) 2027.

#### Reducing the effect of electromigration

A commonly used Al-Cu alloy is Al-1.5at%Si-4at%Cu. Despite the increase in resistivity from 2.86 μΩ cm (pure Al) to 3 μΩ cm (Al-Cu alloy), the improvement in electromigration resistance made Al-Cu alloys the system of choice in metallization. There are several reasons for the increase in electromigration resistance:

* As most mass transport in Al occurs along grain boundaries, the addition of Si and Cu reduces the grain boundary diffusion rate, by increasing the activation energy of grain-boundary diffusion.
* The addition of Cu to the Al alloy results in the formation of CuAl2 precipitates. These precipitates form primarily on grain boundaries, impeding grain boundary diffusion. The precipitates also act as reservoirs of Cu, delaying any damage, as the precipitates must dissolve before the level of Cu in the grain boundaries can fall significantly. Dissolution is rather slow, as a steady-state flow of Cu can be established from precipitate to precipitate before migration takes place.

![](figures/atomic_motion.png)

![](../images/divider400.jpg)

#### Using Pure Copper

The trend towards narrower interconnections, and faster and more reliable devices, resulted in the examination of the possibility of using Cu-based metallization. It was found that the change to a Cu-based material greatly improved the reliability of devices.

![](figures/Cu_lattice.png)

A comparison (see below) shows the reasons for the currently increasing use of Cu-based metallization in the microelectronics industry:

**Advantages of Cu**

* Cu is more conductive than Al, thus allowing finer metallization with lower resistive losses (ρCu = 1.67 μΩ cm, ρAl = 2.65 μΩ cm).
* Atomic migration in Al occurs along grain boundaries and surfaces; there is little or no bulk transport in Cu. (Bulk self-diffusivity at 933 K - Al: 1.9 × 10⁻¹² m² s⁻¹, Cu: 1.8 × 10⁻¹⁶ m² s⁻¹.)
* Al is very susceptible to electromigration (rapid formation of hillocks and voids), whilst Cu is less vulnerable as it has a higher mass and a higher melting point.
* Cu is also less likely to fracture under stress.

**Disadvantages of Cu**

* Cu diffuses rapidly into Si and SiO2, causing deep-level defects as it contaminates the Si.
* The main transport path in Cu is the top surface of metallization lines. This results in some electromigration damage. Al-based metallization does not exhibit this, as Al forms a protective oxide layer preventing surface transport.
* As Cu cannot be dry etched, it was necessary to develop an electroplating process for making copper networks, the dual-damascene chemical-mechanical polishing (CMP) process, and an effective liner material for use as a copper diffusion barrier and to promote adhesion.
Due to the favourable properties of Cu, it is possible for the chip size to be reduced whilst increasing the speed and complexity of the device. Cu has proved to be an excellent metallization material, as it has an improved current-carrying capability and high electromigration resistance; its disadvantages have been overcome using new thin-film technology and careful materials selection. This is sufficient for now, though there is a continuing need to achieve high conductivity and minuscule dielectric constants for future devices. Therefore, in order for the microelectronics industry to keep up with Moore's Law and ever-increasing consumer needs, new materials and processes will have to be introduced.

Summary
=

This TLP has covered several points:

* The reliability of microelectronic devices is important, as our lifestyles depend on the current technology and we expect continuing improvements in performance. Electromigration reduces device reliability, which is a problem for the electronics industry.
* Electromigration is the transport of material in a conductor under the influence of an applied electric field.
* Electromigration phenomena occur in all conductors on application of an electric field. Damage is commonly observed within narrow metallization lines in integrated circuits, as they are exposed to particularly high current densities.
* The electromigration force, which causes atomic migration, is a combination of two forces - the direct force and the wind force - related by:

  *F*net = *F*wind + *F*direct = *Z*\**ejρ*

  where *Z*\* = effective charge; *j* = current density (A m⁻²); and *ρ* = resistivity (Ω m). Net atomic transport towards the anode occurs as the net force biases the net diffusion direction.
* Damage induced by electromigration results in hillock and void formation. Electromigration-induced damage is caused by divergences in atomic flux, themselves caused by place-to-place variation in:
  + microstructure
  + material
  + temperature
* Choosing the best materials for use in an integrated circuit, the most suitable processing methods and good device design can assist in reducing the resultant electromigration damage.
* The device lifetime can be roughly estimated using Black's Law to extrapolate the median time to failure at service temperature from accelerated test conditions:

  \[{t\_{50}} = c{j^{ - n}}{e^{\frac{{{E\_e}}}{{kT}}}}\]

  where *t*50 = median time to failure of metal lines subjected to electromigration (hrs); *c* = constant based on the metal line properties (units depend on exponent *n*); *j* = current density (A m⁻²); *n* = value between 1 and 7 (though commonly 2); *E*e = activation energy [within the range 0.5-0.7 eV for Al]; *k* = Boltzmann constant (1.38 × 10⁻²³ J K⁻¹); and *T* = temperature (K).
* Due to the increased miniaturization of microelectronic devices, there is a need for new materials and processes to be used for metallization lines. One such development is the change from Al-based to Cu-based metallization in high performance devices. Further speed, reliability and miniaturization requirements are spurring on the search for new materials for use within integrated circuits.

Questions
=

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*
1. Which force/forces act on metal ions within a conductor during electromigration, under the application of an electric field? (You may pick more than one answer)

   (a) Wind force, *F = aneλdσijρ*
   (b) Direct force, *F = aZejρ*
   (c) Mechanical back force, *F = ΩΔσ/Δx*
   (d) Moment force, *F* = Moment/distance from pivot
   (e) None of the above

2. The median life of pure Al stripes of width *w* (in cm) and thickness *d* (in cm) at 50 °C is given by:

   *t*50 = 4.4 × 10¹² *wdj*⁻ⁿ exp(*E*e/*kT*)

   where *n* = 2, *E*e = 0.49 eV, *j* = 1 × 10⁵ A cm⁻², *w* = 0.4 μm and *d* = 0.5 μm. What is the median life of the interconnect?

   (a) 0.38 hrs
   (b) 3.83 hrs
   (c) 38.3 hrs
   (d) 380 hrs

3. How does the median life of pure Al stripes compare with that of Al-Si interconnects of the same dimensions as above, for which:

   *t*50 = 2.2 × 10¹⁵ *wdj*⁻ⁿ exp(0.54 eV/*kT*)

   a. the same
   b. less by 10%
   c. more by 10%
   d. more by 100,000%

4. Which material properties are desirable for use as interconnects?

   a. low availability
   b. low resistivity
   c. low melting point
   d. low electromigration resistance

5. Which material is most commonly used in vias?

   a. Aluminium-based alloys
   b. Tungsten
   c. Titanium Nitride
   d. Silver

### Deeper questions

*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

6. In a solid conductor, the mechano-diffusion force, *F*m, acting on an atom in a gradient of hydrostatic stress, d*σ*H/d*x*, is given by:

   ![](eqn/eqn_questions/eq0001L.gif)

   where *C*0 = number of atoms per unit volume. A conductor has along it a segment of length *L* which has a raised atomic diffusivity. The material of the conductor develops damage when the difference in hydrostatic stress between the ends of such a segment reaches the value (Δ*σ*H)crit. Derive the critical condition for the damage to develop in terms of *L* and current density, *j*, in the conductor.

7. An IC has several levels of Al-based metallization linked by W vias. In a test, electromigration damage develops in an Al-based conductor linking two vias when the current density flowing through it from one via to the other, 10 μm away, exceeds 9 × 10⁹ A m⁻². It is planned to use the same materials with a current density of 1.2 × 10¹⁰ A m⁻². What condition must be met by the length of links between vias for damage to be avoided? How will the temperature of operation affect the development of electromigration damage?

   ![](figures/image for question 7.png)

8. In the schematic diagram of a microelectronic device (below), where would electromigration-induced damage be observed? Give your reasons. What type of material would you use for the vias and interconnects, and why?

   ![](figures/image for question 8.png)

Going further
=

**Books and Papers:**

* *Reliability and Failure of Electronic Materials and Devices* by Milton Ohring, Academic Press, San Diego, 1998. Provides a good overall explanation of the topic of electromigration.
* Review article: 'Electromigration in integrated circuit conductors' by J.R. Lloyd (*J. Phys. D: Appl. Phys.* **32** (1999) R109-R118). A good summary of electromigration in integrated circuit conductors.
* VLSI Technology by S.M. Sze (editor), McGraw-Hill Book Company, New Jersey, 1988.
  Contains a good section on metallization and the choice of materials for different uses.

**Website:**

* The Wikipedia article on electromigration contains a nice, simple description of the electromigration process. Several other searches can be made within Wikipedia, the free encyclopedia, to look up technical terms.
Aims

On completion of this Teaching and Learning Package you should:

* Be aware of the structure, origin and use of the Ellingham Diagram;
* Be able to use the interactive diagram included to find thermodynamic data quickly and effectively, including:
  + Δ*G*, *K*, pO2 for standard and non-standard reactions across a range of temperatures;
  + Relative stabilities of elements with respect to oxidation etc., or conversely those of oxides, sulphides etc.

Before you start

There are no special prerequisites for this TLP.

Introduction

The Ellingham Diagram, originally constructed for oxides, is a tool to find a variety of thermodynamic data quickly, without the need for repetitive calculation. The diagram is essentially a graph representing the thermodynamic driving force for a particular reaction to occur, across a range of temperatures. With the data for several reactions plotted on the same graph, the relative stabilities of different elements with respect to, for example, their oxides, can be seen. It is also possible to compare the relative driving forces of an element for oxidation or sulphidation in an environment containing both oxygen and sulphur as reactants.

Thermodynamics

Here we shall go through the basic thermodynamics that lies behind the use of the Ellingham diagram. First, we will establish the link between the thermodynamics of a reaction and its chemistry.

The Gibbs free energy, *G*, of a system can be described as the energy in the system available to do work. It is one of the most useful state functions in thermodynamics as it considers only variables contained within the system, at constant temperature and pressure. It is defined as:

\(G = H - TS\)        (1)

Here, *T* is the temperature of the system and *S* is the entropy, or disorder, of the system. *H* is the enthalpy of the system, defined as:

\(H = U + {\rm{p}}v\)        (2)

where *U* is the internal energy, p is the pressure and *v* is the volume.

To see how the free energy changes when the system is changed by a small amount, we can differentiate the above functions:

\({\rm{d}}G = {\rm{d}}H - T{\rm{d}}S - S{\rm{d}}T\)        (3)

and, differentiating (2),

\({\rm{d}}H = {\rm{d}}U + {\rm{pd}}v + v{\rm{dp}}\)        (4)

From the first law,

\({\rm{d}}U = {\rm{d}}q' - {\rm{d}}w\)        (5)

and from the second law,

\({\rm{d}}S = \) \({{{\rm{d}}q'} \over T} \),        (6)

we see that

\(\displaylines{ {\rm{d}}G = {\rm{d}}q' - {\rm{d}}w + {\rm{pd}}v + v{\rm{dp}} - T{\rm{d}}S - S{\rm{d}}T \cr = - {\rm{d}}w + {\rm{pd}}v + v{\rm{dp}} - S{\rm{d}}T \cr = v{\rm{dp}} - S{\rm{d}}T \cr} \)        (7)

since work, \({\rm{d}}w = {\rm{pd}}v\).

The above equation shows that if the temperature and pressure are kept constant, the free energy does not change. This means that the Gibbs free energy of a system is unique at each temperature and pressure.

At a constant temperature, d*T* = 0 and so

\({\rm{d}}G = v{\rm{dp}}\)        (8)

We can find *G* for the system by integration. To do this we need the system's equation of state, to give a relationship between *v* and p. We will consider an ideal gas. For one mole of an ideal gas the equation of state is:

\(v = \) \({{RT} \over {\rm{p}}}\),       (9)

so (8) becomes

\({\rm{d}}G =\) \( RT{{{\rm{dp}}} \over {\rm{p}}}\)         (10)

Integrating:

\(G = RT\ln \left( {\rm{p}} \right) + const.\)        (11)

This we can express as

\(G = G^\circ + RT\ln \)\(\left( {{{\rm{p}} \over {{\rm{p}}^\circ }}} \right)\)        (12)

We define *G*° to be the standard free energy at the standard pressure, p°.
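As a quick numerical check of equation (12) - a worked example added here, with the conditions chosen arbitrarily - consider one mole of an ideal gas at 298 K taken from p° = 1 bar to p = 2 bar:

\[ G - G^\circ = RT\ln\left(\frac{p}{p^\circ}\right) = (8.314\ \mathrm{J\,mol^{-1}\,K^{-1}})(298\ \mathrm{K})\ln 2 \approx +1.7\ \mathrm{kJ\,mol^{-1}} \]

so compressing the gas isothermally raises its free energy, and halving the pressure instead would lower it by the same amount.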
These standard values are nothing more than integration constants, but using them is very useful, as we shall see. They are a consequence of the fact that one can only describe energy *changes* absolutely - there is no absolute energy scale, so the energy value we give to a system is arbitrary.

Chemical Reactions

We will now see why studying the free energy of a system is useful in determining its behaviour. The free energy change, ΔG, of a chemical reaction is the difference in free energy between the products of the reaction and the reactants. If the free energy of the products is less than the free energy of the reactants, there will be a driving force for the reaction to occur.

For the reaction $${\rm{A}} + {\rm{B}} \to {\rm{C}}$$, the free energy change is

\(\Delta G{\rm{ = }}{{\rm{G}}\_{\rm{C}}}{\rm{ - }}{{\rm{G}}\_{\rm{A}}}{\rm{ - }}{{\rm{G}}\_{\rm{B}}}{\rm{ }}\)                (13)

\( = \Delta G^\circ + RT\ln \) \(\left( {{{{{{p\_c}} \over {{p^0}\_c}}} \over {{{{p\_A}{p\_B}} \over {{p^0}\_A{p^0}\_B}}}}} \right) \)        (14)

$$ = \Delta G^\circ + RT\ln \left( {{{{p\_c}} \over {{p\_A}{p\_B}}}} \right)$$

if the standard states \(p\_A^ \circ = p\_B^ \circ = p\_C^ \circ = {\rm{1 bar}}\).

We see that the free energy change of a reaction is determined by the relative quantities of reactants and products.

The Equilibrium Constant

A chemical reaction will occur if the total free energy of the products is less than the total free energy of the reactants (i.e. the free energy change for the reaction is negative). If the system containing the reactants and products is closed (if there is no input of reactants, for example), the concentration of reactants will decrease and the concentration of products will increase as the reaction proceeds. This will alter the state of the system and therefore alter the free energy change for the reaction (see equation 14, above). The reaction will continue if the free energy change remains negative.

Hence, the system proceeds down a free energy *gradient* with respect to composition, and this gradient provides the *driving force* for the reaction to proceed. The system alters the quantities of reactants and products in response to the driving force until a minimum in free energy is reached and the gradient is zero. This is a point of equilibrium. At equilibrium the free energy change for the reaction is equal to zero:

$$\displaylines{ \Delta G = \Delta G^\circ + RT\ln \left( {{{{p\_C}} \over {{p\_A}{p\_B}}}} \right) \cr = 0 \cr} $$

Therefore

$$\displaylines{ \Delta G^\circ = - RT\ln {\left( {{{{p\_C}} \over {{p\_A}{p\_B}}}} \right)\_{equilibrium}} \cr = - RT\ln {K\_P}. \cr}\;\;\;\;(15) $$

For the composition at equilibrium, the quotient is equal to *K*P, the equilibrium constant for the reaction at constant pressure. We see that the equilibrium composition of the system is defined by the standard free energy change, ΔG°. Equation 15 provides a link between the thermodynamics of a reaction and its chemistry. ΔG° for a reaction is hence a very useful value to know.
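Equation (15) is easy to evaluate numerically. The sketch below converts an assumed standard free energy change into an equilibrium constant; both ΔG° and T are illustrative values, not data read from any diagram.

```python
# Sketch of equation (15): K_P = exp(-dG0 / (R T)).
# dG0 and T are assumed, illustrative values.
import math

R = 8.314          # gas constant (J mol^-1 K^-1)
T = 1000.0         # assumed temperature (K)
dG0 = -200.0e3     # assumed standard free energy change (J mol^-1)

K_P = math.exp(-dG0 / (R * T))
print(f"K_P = {K_P:.1e}")   # ~3e10: equilibrium lies far towards the products
```

A strongly negative ΔG° therefore corresponds to an enormous equilibrium constant, which is why reactions with very negative standard free energies go essentially to completion.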
The Ellingham diagram
=

The Ellingham diagram plots¹ the standard free energy of a reaction as a function of temperature. Originally the values were plotted for oxidation and sulphidation reactions for a series of metals, relevant to the extraction of metals from their ores (extraction metallurgy). These reactions generally involve the reaction of a gaseous phase (the oxidising gas) with (almost) pure condensed phases (the metal and oxidised compounds).

By using the diagram, the standard free energy change for any included reaction can be found at any temperature. Along with allowing you to calculate the equilibrium composition of the system, the data on the diagram are useful in other ways, as we shall see.

![Example of an Ellingham diagram](images/Ellingham_2.jpg)

Note: Due to a shortage of reliable experimental data, some of these lines are not plotted over the complete range of temperature.

- ¹ Ellingham H. J. T., J Soc Chem Ind (London) **63** 125 (1944)

Applications

We shall consider the following reactions on the Ellingham diagram below: the oxidation of silver to form Ag2O (s), and the oxidation of cobalt to form CoO (s) and Co3O4 (s):

4Ag (s) + O2 (g) = 2Ag2O (s)

2Co (s) + O2 (g) = 2CoO (s)

3Co (s) + 2O2 (g) = Co3O4 (s)

![Example Ellingham Diagram for Co - Ag system](images/Ellingham_Co_Ag.jpg)

The graph consists of lines of

$$\Delta G^\circ = \Delta H^\circ - T\Delta S^\circ $$

Although ΔS° and ΔH° vary with temperature, the changes are so small as to be negligible, and so the lines are approximately straight. This means that they are of the form

$$y = mx + c$$

We can immediately see that the standard free energy change is greater in magnitude (more negative) for the cobalt reaction relative to that of silver at all temperatures. This means that at all temperatures the equilibrium constant is larger for the cobalt reaction - the composition is further weighted towards the products of the reaction. This is the reason that metals that appear higher up on the diagram are more stable than those metals that appear lower down, and are more likely to be found in their pure solid form.

The gradient of the two lines is approximately the same. The gradient of the lines is simply the negative of the standard entropy change for the reactions:

$${{\partial \Delta G^\circ } \over {\partial T}} = - \Delta S^\circ $$

This is evident from the reactions, which both involve the elimination of one mole of gas - a large decrease in entropy. This is the reason for the positive slope of the lines. The reason for any change in slope is a change in phase of a component of the system, which alters the entropy change.

As the standard free energy change for both reactions is still negative (below 460 K), the large decrease in entropy must be counteracted by a large negative enthalpy of reaction. This is indeed the case. The intercept of the lines with 0 K gives the enthalpy of the reaction:

$${\left. {\Delta G^\circ } \right|\_{0K}} = \Delta H^\circ $$

We can hence see that the relative stability of the *oxide* of cobalt compared with the oxide of silver is due to the much more negative standard enthalpy of reaction.

Partial pressure of reacting gas

Using equation (15) we can see that the equilibrium constant is related to the partial pressures of the reacting gases:

$${K\_P} = {{{p\_C}} \over {{p\_A}{p\_B}}}$$

for the reaction \({\rm{A}} + {\rm{B}} \to {\rm{C}}\). (Remember that these pressures must be related to a standard state.) For a metal oxidation reaction,

2M (s) + O2 (g) = 2MO (s),

the equilibrium constant has the form

$${K\_P} = {1 \over {p\_{{O\_2}}^{}}}$$

We can therefore find the equilibrium partial pressure of oxygen at a particular temperature from the value of Δ*G*°:
The equilibrium partial pressure of oxygen is the pressure at which the driving force for the reaction is zero. From equation 14 we see that if the partial pressure of oxygen is greater than this value, the free energy change for the reaction is negative and there is a driving force for the reaction to take place: metal will be oxidised, and the partial pressure of oxygen will drop until it reaches the equilibrium value. This effect is described by Le Chatelier's principle. If the partial pressure of oxygen is below the equilibrium value, oxidation is avoided. (In fact, the metal oxide will dissociate to form metal plus oxygen gas – there is then a driving force for the reaction to proceed *backward*. For this reason the equilibrium partial pressure is often known as the dissociation pressure.)

Reading pO2 from the Ellingham diagram

In order to avoid calculating the equilibrium partial pressure for each value of ΔG°, Richardson² added a nomographic scale to the Ellingham diagram. The equilibrium partial pressure is found as follows: a line is drawn from the origin of the graph (T = 0, ΔG° = 0) through the point on the Ellingham line of interest, at the required temperature. The equilibrium partial pressure is read off at the point where the drawn line crosses the nomographic scale. ![Example Ellingham Diagram with nomographic scale](images/Ellingham_pO2.jpg) The scale is derived by considering the change in free energy of one mole of ideal gas, from p = 1 atm to p = P. We know that \(\Delta G^\circ = RT\ln p_{O_2}\), and using equation 12 (\(G = G^\circ + RT\ln(p/p^\circ)\)) we find that this is equal to the difference in free energy between a mole of gas at P atm and a mole of gas at 1 atm:

$$\Delta G = RT\ln P$$

i.e. a line from the origin with gradient R ln P. A series of lines for different values of the partial pressure, P, is shown below. ![Example Ellingham Diagram showing series of lines for different P](images/Rlnp.jpg) - 2 Richardson F. D. and Jeffes J. H. E., "The Thermodynamics of Substances of Interest in Iron and Steel Making from 0°C to 2400°C: I-Oxides", J Iron and Steel Inst **160** 261 (1948)

Other gas mixtures

The oxygen required to cause oxidation need not come from oxygen gas. Consider the following reaction: 2CO (g) + O2 (g) = 2CO2 (g). For this reaction,

$$K_{CO/CO_2} = \frac{p_{CO_2}^2}{p_{CO}^2 \, p_{O_2}}$$

or

$$p_{O_2} = \frac{p_{CO_2}^2}{p_{CO}^2 \, K_{CO/CO_2}}$$

and hence

$$\ln \frac{1}{p_{O_2}} = \ln K_{CO/CO_2} + 2\ln \frac{p_{CO}}{p_{CO_2}} = \frac{-\Delta G^\circ}{RT}$$

We see that \(p_{O_2}\) is equivalent to the ratio \(p_{CO_2}/p_{CO}\). Another nomographic scale may be added to the diagram, with a new origin, C, where the CO/CO2 line crosses the y-axis. Similarly, for the reaction 2H2 + O2 = 2H2O, \(p_{O_2}\) is equivalent to the ratio \(p_{H_2O}/p_{H_2}\). Adding a further nomographic scale to the diagram, we see that the equilibrium pressure ratios of CO and CO2, or of H2 and H2O, for a given oxidation of a metal (or reduction of an oxide) can be deduced at a given temperature from the diagram.
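Rearranging the expressions above gives the CO/CO2 ratio that fixes a given oxygen potential: \(p_{CO}/p_{CO_2} = 1/\sqrt{K_{CO/CO_2}\,p_{O_2}}\). A minimal Python illustration (the value of K here is invented for demonstration – it is not data for any particular temperature):

```python
import math

def co_co2_ratio(p_O2, K):
    """CO/CO2 ratio that fixes a given O2 partial pressure, from
    p_O2 = p_CO2^2 / (p_CO^2 * K) for 2CO + O2 = 2CO2."""
    return 1.0 / math.sqrt(p_O2 * K)

# Illustrative only: K = 1e20, target p_O2 = 1e-16 bar
print(f"p_CO / p_CO2 = {co_co2_ratio(1e-16, 1e20):.2e}")
```

This is exactly the conversion that the extra nomographic scales perform graphically.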
Reducing agents =

A major application of the Ellingham diagram is the determination of the conditions required to reduce metal compounds, such as oxides or sulphides, to obtain pure metal. This is often the basis of extraction metallurgy – the extraction of metals from their ores – and it is also very important in the recycling of metals.

A chemical is a reducing agent with respect to a particular metal when the free energy change for its oxidation is more negative than the free energy change for oxidation of the pure metal. This means that when the reducing agent is placed in a closed system containing the metal ore there is a driving force for the dissociation of the metal compound into metal and oxidiser (i.e. oxygen or sulphur gas, for example). Consider the two oxidation reactions below, whose lines on the Ellingham diagram cross each other: 2A + O2 = 2AO (a) and B + O2 = BO2 (b). ![Example Ellingham Diagram for Rh - Oxide system](images/Ellingham_RhO2_Rh2O3.jpg) As we can see, the y-intercept of the first reaction, ΔH°(a), is greater than that of the second reaction, ΔH°(b). Since the lines cross, the gradient of the second line, −ΔS°(b), is greater than that of the first. At the point where the lines cross, the standard free energy changes for the two reactions are equal, which means that a closed system containing the metals A and B will be at equilibrium. This can be shown by considering the reaction below, obtained by subtracting reaction (a) from reaction (b): B + 2AO = 2A + BO2. At T = TE, ΔG for this reaction is zero, and no reaction occurs. Above this temperature, however, the oxide of A is reduced by B; below it, the oxide of B is reduced by A. Drawing a line from the origin through the point at which the lines cross, and extending it to the nomographic scale for oxygen pressure, gives the partial pressure of oxygen at equilibrium. Consider the reaction 3C + 2Fe2O3 = 3CO2 + 4Fe. ΔG is only negative for this reaction above T = 1020 K. Hence, steel furnaces operate above 1020 K.

Other atmospheres =

An Ellingham diagram may be drawn for other systems where the gaseous phase is not oxygen but another gas such as chlorine. This is useful for industrial processes where the environment is a mixture of gases.

Non-standard states =

In calculating the various thermodynamic quantities above we have assumed that the condensed metal phase is pure. In many cases, however, the metal is in the form of an alloy – perhaps in solid solution with other metals. In this case the metal is not in its standard state, and its ability to participate in a reaction – its activity – is reduced. This affects the free energy change for the reaction (which is no longer the *standard* free energy change) and means that the pressure of reacting gas required for equilibrium between the metal in solution and the pure product is changed. These values can be found by substituting the modified activity of the metal into the expressions for the equilibrium constant and ΔG:

$$K = \frac{1}{p_{B({\rm M\ in\ standard\ state})}} = \frac{1}{a_M \, p_{B({\rm M\ in\ alloy})}} \;\Rightarrow\; p_{B({\rm M\ in\ alloy})} = \frac{p_{B({\rm M\ in\ standard\ state})}}{a_M}$$

$$\Delta G = \Delta G^\circ - RT\ln a_M$$

where aM is the activity of the metal in the alloy.
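A minimal sketch of these corrections (Python; the ΔG°, temperature and activity values are illustrative only, and the pressure correction follows the form of K given above):

```python
import math

R = 8.314  # J mol^-1 K^-1

def alloy_equilibrium(dG_standard, T, a_M):
    """Free energy change and equilibrium gas pressure when the metal
    is in an alloy with activity a_M (pure metal: a_M = 1)."""
    dG = dG_standard - R * T * math.log(a_M)
    p_eq_pure = math.exp(dG_standard / (R * T))   # dissociation pressure, pure M
    p_eq_alloy = p_eq_pure / a_M                  # raised by the reduced activity
    return dG, p_eq_alloy

# Illustrative only: dG_standard = -300 kJ/mol O2, T = 1000 K, a_M = 0.1
dG, p_eq = alloy_equilibrium(-300e3, 1000.0, 0.1)
print(f"dG = {dG/1e3:.1f} kJ/mol, p_eq = {p_eq:.2e} bar")
```

With a_M < 1 the free energy change is less negative and the equilibrium pressure is higher: the alloyed metal is harder to oxidise than the pure metal.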
Graphically, a decrease in activity has the effect of rotating the Ellingham line for the reaction anti-clockwise about its intersection with T = 0.

Summary =

In the preceding pages we have seen that the standard free energy change for a reaction is a very useful quantity to know, in that its value determines the equilibrium constant and hence the equilibrium composition of the system. We have also seen that the Ellingham diagram is a convenient way of displaying the standard free energy changes of many reactions at different temperatures, and how to read the value of pO2 from the diagram. Finally, we have seen how the Ellingham diagram is used in extraction metallurgy to find the conditions needed for the reduction of metal ores.

The interactive Ellingham diagram =

The interactive Ellingham diagram included within this TLP is a teaching and learning tool, which you can use to obtain a variety of useful thermodynamic information pertaining to a wide range of reactions. Click on this link to launch the interactive Ellingham diagram. *Please note that the diagram is for educational purposes only, and the accuracy of the data contained within it is not guaranteed.*

Interactive Ellingham diagram user guide

**- Displaying reactions** The diagram consists of two screens. On the first screen, you can select the reactions you wish to include on the Ellingham diagram. First of all, select the system or systems you want to investigate; this will be dictated by the gaseous phases present in the system. Position the cursor over the name of each type of compound to highlight the elements for which data is available. Then you can select the metals involved in the reactions. In order to keep the diagram simple you may select up to two elements at one time, along with carbon and hydrogen. Press the “see Ellingham diagram” button to proceed to the second screen, where the selected reactions are displayed on the Ellingham diagram. On the second screen, the standard free energy of each reaction, in kJ mol⁻¹ of reacting gas, is plotted as a function of temperature in kelvin. Each selected metal may have several reactions, or none, associated with each gaseous phase. A change in gradient of any line may be associated with a phase change: either melting or boiling of the metal, indicated by an m or b at that point, or melting or boiling of the resulting compound, indicated by (m) or (b). The temperature range over which data is available for each line is shown in purple. Each line is extrapolated to the absolute zero of temperature (thin green line) for simple comparison of the standard enthalpy changes of different reactions. **- Obtaining precise reaction data** In order to obtain specific data for a particular reaction, its line on the diagram may be selected by clicking anywhere along its length. The selected line is shown in brown and a crosshair appears at one end. The crosshair can be moved to the temperature of interest using the slider bar along the x-axis of the diagram, and an accurate readout of the standard free energy of the selected reaction is given at the chosen temperature. Data including the equilibrium constant for the reaction, the equilibrium partial pressure of the reacting gas and the equation of the line in the form A + BT are also given.
These values are obtained by simple operations on the standard free energy value – see the earlier sections. A different line may be selected by simply clicking anywhere along its length. **- Non-standard reactions** If the metal is not in its standard state, the free energy change for the reaction changes, along with the partial pressure of reacting gas at equilibrium. Non-standard reaction data at the selected temperature can be obtained by entering the activity of the metal in the blue-coloured box and pressing the “Compute” button. **- Gas mixtures** If the oxidation reactions of carbon or hydrogen are selected, the Ellingham lines for each are shown in black or red respectively, and the ratio of the pressures of the gases CO and CO2, or of the gases H2 and H2O, is given as well as the pressure of O2 gas. The diagram can be restarted, so that different reactions can be seen, by pressing the “Restart” button.

Questions =

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. The Ellingham diagram shows values of which thermodynamic quantity as a function of temperature?
   a. Standard electrode voltage;
   b. Standard free energy change of reaction;
   c. Partial pressure of gas;
   d. Enthalpy change of reaction.

2. For a closed system at equilibrium at a temperature *T*, which of the following statements are true?
   1. ΔG = 0;
   2. ΔG° = 0;
   3. ΔH = TΔS;
   4. ΔS = 0.

3. Why are the slopes of many of the lines on the Ellingham diagram almost identical?
   a. Most reactions involve the elimination of one mole of gas, so there is a similar standard enthalpy change of reaction.
   b. Most reactions involve the elimination of one mole of gas, so there is a similar standard entropy change of reaction.
   c. The activity of most of the metals is the same.
   d. The partial pressure of the reacting gas is the same for all reactions.

4. What thermodynamic quantity does the intercept at *T* = 0 K of any standard free energy vs *T* line signify?
   a. The approximate value of the standard entropy change;
   b. The approximate value of the standard enthalpy change;
   c. The equilibrium constant for the oxidation reaction;
   d. The heat capacity of the oxide.

5. What is the decomposition temperature (to the nearest 50 K) for Ag2O? (This question should be completed with the help of the interactive Ellingham diagram included with this TLP.)
   a. 460 K
   b. 500 K
   c. 540 K
   d. 620 K

6. What is the decomposition temperature (to the nearest 50 K) for PdO? (This question should be completed with the help of the interactive Ellingham diagram included with this TLP.)
   a. 1050 K
   b. 1100 K
   c. 1150 K
   d. 1200 K

7. Which of the following elements can be used to produce Cr from Cr2O3 at 1200 K? (This question should be completed with the help of the interactive Ellingham diagram included with this TLP.)
   1. Mg
   2. Fe
   3. Co
   4. Al

8. Explain, using the Ellingham diagram, how a mixture of Cl2/O2 gas may be used to separate Zn from Fe in galvanised scrap. (This question should be completed with the help of the interactive Ellingham diagram included with this TLP.)

Going further =

### Books

* D.R. Gaskell, *"Introduction to the Thermodynamics of Materials"*, 3rd ed. (Taylor and Francis, 1995).
Provides a good treatment of Ellingham diagrams, pp. 347-395. * E.T. Turkdogan, *"Physical Chemistry of High Temperature Technology"* (Academic Press, 1980). The main data source for the interactive diagram; deals with the physical chemistry of iron- and steelmaking in ch. 9.
Aims

This TLP is designed to help you learn about epitaxial growth: the growth of a (usually thin) single crystalline layer in the same crystallographic orientation as its single crystal substrate. Epitaxial growth is widely used in the electronics industry to enable the deposition of precisely controlled thin layers of semiconductors or oxides for use in devices such as thin film transistors, diodes and lasers. One of the major film deposition techniques, and the one assumed in this TLP, is molecular beam epitaxy (MBE). In this technique low energy gas phase beams of atoms or molecules are directed at the crystalline substrate, usually in a vacuum. The resultant thin films are required to be single crystalline and may, or may not, be atomically flat. The exact form of the thin film (flatness, composition, strain, band gap ...) depends sensitively on the growth parameters, of which the most important is temperature. The aim of this TLP is to allow you to explore the significance of key parameters in the process, such as the substrate temperature and the various bond energies between the atoms involved. At the heart of the package is a two-dimensional simulation of the deposition of the atoms. Using this you can explore the effect of these parameters on the growth of a generic crystal which models many features of "real" systems involving semiconductors such as Si, GaAs and AlGaAs. The perfection of the thin film depends crucially on the mode of growth of the deposit. Three such modes are usually identified:

* Frank-van der Merwe (one perfect monolayer at a time);
* Volmer-Weber (island growth), and a combination of the two called
* Stranski-Krastanov (one or more perfect layers followed by island growth).

You are encouraged to explore the simulation until you can identify the physical reasons why each of these occurs, and thus the conditions most likely to lead to the preferred growth mode and thin film. Your knowledge will be applicable to other epitaxial growth techniques such as MOCVD and VPE, and not simply MBE. The questions associated with this TLP should help you to decide whether you have understood the key messages.

Before you start

You can view the epitaxy simulation simply as a rather pretty animation. However, in order to fully appreciate what is going on you need to be familiar with the concepts of interatomic bonding, the migration of a single atom across the surface of a crystal (surface diffusion) and the thermal activation of processes (Arrhenius behaviour). You do not need to know anything specifically about semiconductors or opto-electronics.

Introduction

Epitaxy by MBE involves expensive vacuum deposition equipment such as that shown in the figure. ![](images/epitaxialGrowthEquipment.jpg) Figure 1: Epitaxial growth equipment at the University of Liverpool. It is used to deposit thin layers (usually less than a micrometre thick) intended to form the active layers in optoelectronic devices. Such layers must be flat, of precise composition with the appropriate concentration of dopant atoms, and may involve abrupt changes of composition, for instance in order to form quantum wells. The details of the arrangement of the deposited atoms are therefore of great importance: not only must the atoms adopt the same crystal structure and orientation as their substrate, but the surface of the growing crystal must either be flat or, if not, then rough in a controlled and predictable way.
When the deposited material is of identical composition to the substrate we refer to homoepitaxy, while if the deposit is different from the substrate we refer to heteroepitaxy. The images in figures 2 and 3 show high resolution TEM images of typical layers, in which the columns of atoms can be seen. ![TEM micrograph of quantum wells in AlGaAs/GaAs imaged so that there is strong contrast between the layers of different composition](images/tem_AlGaAs.jpg) Figure 2: TEM micrograph of quantum wells in AlGaAs/GaAs imaged so that there is strong contrast between the layers of different composition. The following image (Figure 3) shows the columns of atoms in alternating epitaxial layers of GaAs and AlAs. The interfaces can be seen to be flat to about one monolayer. The appearance of the AlAs is different from that of the GaAs, despite their identical crystal structure, because different planes are imaged in the two phases. ![](images/tem_GaAs_AlGaAs.jpg) Figure 3: A high resolution TEM image of alternating GaAs and AlGaAs layers (Simone Montanari, PhD thesis, 2005). Figure 4 illustrates the epitaxial growth of a complex compound. The substrate plane is (001) and the heavy atoms show as white dots in a square array. The interface is clearly very flat. ![Bi0.5Mn0.5FeO3 film (left hand side, lighter contrast), epitaxially grown on a strontium titanate (SrTiO3) single crystal substrate](images/2_1.jpg) Figure 4: Bi0.5Mn0.5FeO3 film (left hand side, lighter contrast), epitaxially grown on a strontium titanate (SrTiO3) single crystal substrate.

The simulation and its limitations

The most obvious limitation of the simulation is its restriction to two dimensions. Surface diffusion on the screen thus only takes place along a line, and atoms are rather more likely to meet each other than in three-dimensional reality. A second, less obvious consequence is that the crystallography of real semiconductors is not reproduced on the screen (silicon and GaAs adopt the "diamond cubic" structure whereas our simulation implies a close-packed fcc structure). Thirdly, we have not allowed you to change the rate of production of the deposited atoms. In an MBE reactor this would be controlled via the temperature of the (solid or liquid) sources of gas phase atoms. The simulation must be regarded as a "model" which demonstrates many (but certainly not all) of the behaviours of the real crystals. Another simplification is that the possible existence of impurity or dopant atoms is ignored. This is obviously of huge importance in the device industry, but makes very little difference to the behaviour simulated in this TLP. However, despite all these limitations, the model is based soundly on the physics of atomic interactions, and incorporates key aspects of the movement of atoms across surfaces ("surface diffusion") including thermal activation. Atoms will therefore show a greater probability of moving from their original position, across the surface, at higher temperatures. It also takes into account the possibility that the relaxed lattice parameter of the deposited crystal might not be the same as that of the substrate crystal. This often occurs in heteroepitaxy, leading to the build-up of strain in the growing layer. The simulation correctly models many of the behaviours observed during the epitaxial growth of real crystals.
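The thermally activated surface diffusion just described can be sketched very simply. The Python fragment below is a minimal toy picture, not the simulation's actual rule: the barrier is taken as the total strength of the bonds an adatom must break (on the TLP's arbitrary 0-100 scale), and the prefactor and the conversion between the reduced temperature and an energy scale are assumptions made purely for illustration.

```python
import math

def hop_rate(n_bonds, bond_energy, T_reduced, nu0=1.0e13):
    """Arrhenius rate for an adatom hop. n_bonds * bond_energy is the
    barrier (arbitrary units); T_reduced is T/Tm as in the simulation.
    The energy scaling kT ~ 10 * T/Tm is an assumed toy conversion."""
    barrier = n_bonds * bond_energy
    kT = 10.0 * T_reduced
    return nu0 * math.exp(-barrier / kT)

# An adatom bonded only to the substrate (one bond of strength 50):
for T in (0.3, 0.5, 0.8):
    print(f"T = {T:.1f} Tm: hop rate ~ {hop_rate(1, 50, T):.2e} per second")
```

Even in this toy picture the rate rises by several orders of magnitude between 0.3 Tm and 0.8 Tm, which is why adatoms freeze where they land at low temperature but can find proper lattice sites, and complete layers, at high temperature.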
The three common growth modes have the following characteristics:

* Frank-van der Merwe: each layer of deposited atoms is completed before the next layer starts to form. The surface at any instant will be flat or will contain a few monolayer steps.
* Volmer-Weber (island growth): each deposited atom attaches to an island (or incipient particle); islands grow appreciably before joining up to cover the substrate completely. The surface at any instant will not be flat, and the substrate may not be entirely covered.
* Stranski-Krastanov: initially the deposited atoms form one or more perfect layers, but this is followed by island growth. The surface is therefore initially flat but develops to become less flat.

Click on this link to launch the simulation. *Please note that the simulation is for educational purposes only, and the accuracy of the data contained within it is not guaranteed.* An effective way of using this TLP is to experiment for a few minutes with the simulation, in order to familiarise yourself with its operation, before attempting to answer the following questions, many of which will require that you experiment with the effect of specific variables on the simulated growth, in order to explore different growth regimes. The simulation offers you by default 100 atoms to deposit (three to four atomic layers), but if you need more to see how the film develops, simply increase the number using the slider control. Notes about the simulation parameters:

* Temperature is shown as Tm, the temperature of the substrate expressed as a fraction of its melting temperature.
* Bond strengths are in arbitrary units on a scale from 0 to 100, where zero implies no bonding and 100 is a strong bond which is unlikely to be broken except at high temperatures.
* Lattice parameter difference is the percentage difference between the natural lattice parameters of the substrate material and the deposit material. If this is non-zero then, for epitaxy to occur, the deposit must be strained to fit the substrate and therefore strain energy will accumulate as growth occurs.

Summary =

After working through this TLP you should understand:

* the importance of growing thin films which are flat at the atomic scale;
* the reasons why three different types of growth can occur during epitaxial deposition;
* the factors which determine the growth mode and hence the flatness of the resultant film;
* the importance of strain in the growth of heteroepitaxial films.

If you read some of the recommended further material (below) you will also appreciate the range of applications for epitaxial films and the existence of alternative growth techniques.

Questions =

Click on this link to launch the simulation; you may need it to answer some of the questions.

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. At a medium temperature, say 0.5 Tm, which of these conditions favours layer (Frank-van der Merwe) growth?

| | Adatom-adatom bond strength | Adatom-substrate bond strength | Lattice parameter difference |
| - | - | - | - |
| a | 50 | 50 | 0 |
| b | 20 | 10 | 0 |
| c | 30 | 40 | 0 |
| d | 20 | 50 | 0.6% |

2.
At a low temperature, say 0.3 Tm, which of these conditions favours island (Volmer-Weber) growth?

| | Adatom-adatom bond strength | Adatom-substrate bond strength | Lattice parameter difference |
| - | - | - | - |
| a | 50 | 10 | 1% |
| b | 50 | 50 | 0.3% |
| c | 20 | 60 | 0 |
| d | 30 | 30 | 0.1% |

3. At a high temperature, say 0.8 Tm, which of these conditions favours Stranski-Krastanov growth?

| | Adatom-adatom bond strength | Adatom-substrate bond strength | Lattice parameter difference |
| - | - | - | - |
| a | 0 | 40 | 0 |
| b | 70 | 30 | 0 |
| c | 80 | 20 | 0.1% |
| d | 50 | 50 | 0.6% |

### Deeper questions

*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

4. At 0.7 Tm, what is the effect of lattice parameter difference in the following cases? Increase the number of atoms from 100 to 150 to see this effect.

| | Adatom-adatom bond strength | Adatom-substrate bond strength | Lattice parameter difference |
| - | - | - | - |

### Open-ended questions

*The following questions are not provided with answers, but are intended to provide food for thought and points for further discussion with other students and teachers.*

5. Try to explain the results you saw in the sixteen runs of Questions 1 to 4. Predict a set of parameters which should give Stranski-Krastanov growth at 0.2 Tm, e.g. [50, 50, 1].

6. Predict and confirm a set of conditions which might give a gas rather than a deposit at any temperature – e.g. [0,0,0] or [0,0,1], because the adatoms then bond neither to the substrate nor to each other.

### Deeper questions

*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

7. What happens at 0.5 Tm in the following conditions? Is this physically reasonable?

| Adatom-adatom bond strength | Adatom-substrate bond strength | Lattice parameter difference |
| - | - | - |
| 100 | 0 | 0 |

8. What happens as more and more atoms are added in the conditions of Q7? What drives this process?

9. Run the simulation with parameters which give you either pronounced SK growth or clear island formation. What shape are the resultant particles or islands?

10. What would you expect to happen to the growing layer if the lattice parameter difference was 5%?

11. If the substrate had been polished so that its surface was not exactly a low-index plane (e.g. perhaps 0.5 degrees away from exact {111}), would there be any occasions during growth when there were no steps on the growing surface?

Going further =

Most of the literature relating to epitaxy has arisen in the context of semiconducting or oxide systems. There are currently good articles on Wikipedia which can be found if you search for the following key words. More on the material covered in this TLP:

* Stranski-Krastanov growth
* Epitaxy
* Molecular beam epitaxy (*don't search for* MBE)

For some ideas on why we grow epitaxial thin films:

* Quantum well
* Laser diode

And for some background on other approaches to growth, search for:

* Thin film solar cell
Aims

On completion of this TLP you should:

* Understand the basics of the finite element method
* Be familiar with the concepts of nodes, elements and discretisation
* Understand the direct stiffness method
* Be able to construct an element stiffness matrix and a global stiffness matrix for 1-dimensional elements
* Appreciate the importance of boundary conditions
* Understand shape (interpolation) functions for 1-dimensional elements
* Understand the difference between linear and non-linear static finite element problems
* Be familiar with some common reasons for non-convergence

Before you start

There are no special prerequisites for this TLP, but it would be useful to be familiar with stress and strain, beam bending mechanics and matrices.

Introduction

Overview - The Finite Element Method (FEM) is a mathematical (numerical) technique for finding approximate solutions to partial differential equations. It is a technique which is very well suited to the study and analysis of complex physical phenomena, particularly those exhibiting non-linearity in geometry and/or material behaviour (which is often the case in "real-world" situations). It is used frequently to tackle problems that are not readily amenable to analytical treatment. Such problems can be structural, thermal, electrical, magnetic, acoustic etc., either in isolation or when coupled. Coupled examples include (but are not limited to): thermomechanical – where constrained differential thermal expansion generates thermal stresses; thermoelectric – where heat is generated in a material due to its resistance to current flow; and conjugate heat transfer – where moving fluids can remove heat from hot objects over which they pass. When the Finite Element Method is used to solve problems of this type it is often referred to as Finite Element Analysis (FEA).

Premise - The premise is very simple: continuous domains (geometries) are decomposed into discrete, connected regions (or ***finite elements***). An assembly of ***element-level*** equations is subsequently solved, while conditions of kinematic compatibility are enforced, in order to establish the response of the complete domain to a particular set of boundary conditions. The premise is described to some degree in the figure below, which also lists the governing equations (in differential form) for a range of common physical phenomena (in 1-D). ![discretisation of continuous domains](images/fem_overview.jpg)

Time and Lengthscales - Typical time and lengthscales for application of the finite element method are shown in the following figure. ![Time and lengthscales for application of FEM](images/fem_overview2.jpg)

Example videos - The videos below have been produced from finite element analyses: [Video: FEM of plate perforation] [Video: Punch]

Nodes, elements, degrees of freedom and boundary conditions =

“Nodes”, “Elements”, “Degrees of Freedom” and “Boundary Conditions” are important concepts in Finite Element Analysis. When a domain (a geometric region) is meshed, it is decomposed into a series of discrete (hence finite) **ELEMENTS**. The meshed geometry of an exhaust manifold is shown in the figure below. It can be seen that the elements in the mesh conform very well to the geometry, and therefore represent a good approximation of the geometry. The manifold in this instance is meshed with a 3-dimensional brick element which contains 20 **NODES**.
Adjacent elements are connected to each other **AT** the nodes. A node is simply a point in space, defined by its coordinates, at which **DEGREES OF FREEDOM** are defined. In finite element analysis a degree of freedom can take many forms, depending on the type of analysis being performed. For instance, in a structural analysis the degrees of freedom are displacements (Ux, Uy and Uz), while in a thermal analysis the degree of freedom is temperature (T). In the exhaust manifold example there are 4 degrees of freedom at each node – Ux, Uy, Uz and T – since the analysis is a coupled temperature-displacement analysis (due to thermal expansion effects). These **FIELD VARIABLES** are calculated at every node from the governing equation. Field variable values between the nodes and within the elements are calculated using interpolation functions, which are sometimes called shape or basis functions. ![Manifold](images/fem_manifold.jpg) Manifold and node map.

There are of course many types of element, covering the complete range of space dimension. The most common are shown in the figure below, along with the position of the nodes, where it can be seen that some of the elements have “midside” nodes – i.e. nodes positioned midway between the corner nodes. The edges of these “***higher order***” elements can therefore curve – making them suitable for capturing complex geometrical shapes (as in the manifold above). This is possible since these elements permit the solution between the nodes to vary in non-linear ways (see the section on shape functions), which is an important feature when field variables change rapidly. ![FEM nodes](images/fem_nodes.jpg) Examples of FEM element types.

Boundary Conditions - Boundary conditions are specified values of the field variables (or their derivatives) on the boundaries of the field (the geometry). They fall into three categories: Dirichlet conditions – where you prescribe the variable for which you are solving; Neumann conditions – where you prescribe a flux, which is the gradient of the dependent variable; and Robin conditions – a mixture of the two, where a relation between the variable and its gradient is prescribed. The following table features some examples from various physical fields that show the corresponding physical interpretation.

| **Physics** | **Dirichlet** | **Neumann** | **Robin** |
| - | - | - | - |
| Solid Mechanics | Displacement | Traction (Stress) | Spring |
| Heat Transfer | Temperature | Heat Flux | Convection |
| Pressure Acoustics | Acoustic Pressure | Normal Acceleration | Impedance |
| Electric Currents | Fixed Potential | Fixed Current | Impedance |

Direct stiffness method and the global stiffness matrix =

Although there are several finite element methods, we analyse the Direct Stiffness Method here, since it is a good starting point for understanding the finite element formulation. We consider first the simplest possible element – a 1-dimensional elastic spring which can accommodate only tensile and compressive forces.
For the spring system shown, we accept the following conditions:

* *Condition of Compatibility – connected ends (nodes) of adjacent springs have the same displacements*
* *Condition of Static Equilibrium – the resultant force at each node is zero*
* *Constitutive Relation – describes how the material (spring) responds to the applied loads*

![](images/stiffness_model_spring.jpg) *Model spring system*

The constitutive relation can be obtained from the governing equation for an elastic bar loaded axially along its length:

\[\frac{d}{du}\left( AE\frac{\Delta l}{l_0} \right) + k = 0 \;\;\;\;\;(1)\]
\[\frac{\Delta l}{l_0} = \varepsilon \;\;\;\;\;(2)\]
\[\frac{d}{du}\left( AE\varepsilon \right) + k = 0 \;\;\;\;\;(3)\]
\[\frac{d}{du}\left( A\sigma \right) + k = 0 \;\;\;\;\;(4)\]
\[\frac{dF}{du} + k = 0 \;\;\;\;\;(5)\]
\[\frac{dF}{du} = -k \;\;\;\;\;(6)\]
\[dF = -k\,du \;\;\;\;\;(7)\]

The spring stiffness equation relates the nodal displacements to the applied forces via the spring (element) stiffness. The minus sign denotes that the force is a restoring one, but from here on we use the scalar version of Eqn.7.

Derivation of the Stiffness Matrix for a Single Spring Element ![single spring element](images/single_spring_element.jpg) From inspection, we can see that there are two degrees of freedom in this model, ui and uj. We can write the force equilibrium equations:

\[k^{(e)}u_i - k^{(e)}u_j = F_i^{(e)} \;\;\;\;\;(8)\]
\[-k^{(e)}u_i + k^{(e)}u_j = F_j^{(e)} \;\;\;\;\;(9)\]

In matrix form

\[\begin{bmatrix} k^e & -k^e \\ -k^e & k^e \end{bmatrix}\begin{Bmatrix} u_i \\ u_j \end{Bmatrix} = \begin{Bmatrix} F_i^{(e)} \\ F_j^{(e)} \end{Bmatrix} \;\;\;\;\;(10)\]

The order of the matrix is [2×2] because there are 2 degrees of freedom. Note also that the matrix is symmetric. The **element** stiffness relation is:

\[[K^{(e)}]\{u^{(e)}\} = \{F^{(e)}\} \;\;\;\;\;(11)\]

where K(e) is the element stiffness matrix, u(e) the nodal displacement vector and F(e) the nodal force vector. (The element stiffness relation is important because it can be used as a building block for more complex systems; an example of this is provided later, and a short numerical sketch is given below.)
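Equations 10 and 11 are trivial to set up numerically. The sketch below (Python with NumPy; the stiffness and displacement values are illustrative only) builds the 2×2 element stiffness matrix of a single spring and evaluates the nodal forces for a given pair of nodal displacements.

```python
import numpy as np

def element_stiffness(k):
    """2x2 stiffness matrix of a 1D spring element (equation 10)."""
    return np.array([[k, -k],
                     [-k, k]])

Ke = element_stiffness(100.0)   # illustrative stiffness, arbitrary units
u = np.array([0.0, 0.01])       # nodal displacements u_i, u_j
F = Ke @ u                      # nodal forces, equation 11
print(F)                        # -> [-1.  1.]: equal and opposite
```

Note that an equal displacement of both nodes (a rigid-body translation) produces zero force – this is why a single unconstrained element is singular, a point we return to below.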
Derivation of a Global Stiffness Matrix - For a more complex spring system, a 'global' stiffness matrix is required – i.e. one that describes the behaviour of the complete system, and not just the individual springs. ![Complex spring system](images/complex_spring_system.jpg) From inspection, we can see that there are two springs (elements) and three degrees of freedom in this model, u1, u2 and u3. As with the single spring model above, we can write the force equilibrium equations:

\[k^1 u_1 - k^1 u_2 = F_1 \;\;\;\;\;(12)\]
\[-k^1 u_1 + (k^1 + k^2)u_2 - k^2 u_3 = F_2 \;\;\;\;\;(13)\]
\[k^2 u_3 - k^2 u_2 = F_3 \;\;\;\;\;(14)\]

In matrix form

\[\begin{bmatrix} k^1 & -k^1 & 0 \\ -k^1 & k^1 + k^2 & -k^2 \\ 0 & -k^2 & k^2 \end{bmatrix}\begin{Bmatrix} u_1 \\ u_2 \\ u_3 \end{Bmatrix} = \begin{Bmatrix} F_1 \\ F_2 \\ F_3 \end{Bmatrix} \;\;\;\;\;(15)\]

The **global** stiffness relation is written in Eqn.16, which we distinguish from the **element** stiffness relation in Eqn.11.

\[[K]\{u\} = \{F\} \;\;\;\;\;(16)\]

Note the shared k1 and k2 at k22 because of the compatibility condition at u2. We return to this important feature later on.

Assembling the Global Stiffness Matrix from the Element Stiffness Matrices - Although it isn't apparent for the simple two-spring model above, generating the global stiffness matrix (directly) for a complex system of springs is impractical. A more efficient method involves the assembly of the individual element stiffness matrices. For instance, if you take the 2-element spring system shown, ![Complex spring system](images/complex_spring_system_1.jpg) split it into its component parts in the following way ![Complex spring part a](images/complex_spring_a.jpg) ![Complex spring part b](images/complex_spring_b.jpg) and derive the force equilibrium equations for each element separately

\[k^1 u_1 - k^1 u_2 = F_1^{(1)} \;\;\;\;\;(17)\]
\[k^1 u_2 - k^1 u_1 = F_2^{(1)}, \;\;\;\; k^2 u_2 - k^2 u_3 = F_2^{(2)} \;\;\;\;\;(18)\]
\[k^2 u_3 - k^2 u_2 = F_3^{(2)} \;\;\;\;\;(19)\]

then the individual element stiffness matrices are:

\[\begin{bmatrix} k^1 & -k^1 \\ -k^1 & k^1 \end{bmatrix}\begin{Bmatrix} u_1 \\ u_2 \end{Bmatrix} = \begin{Bmatrix} F_1^{(1)} \\ F_2^{(1)} \end{Bmatrix}\;\;{\rm and}\;\;\begin{bmatrix} k^2 & -k^2 \\ -k^2 & k^2 \end{bmatrix}\begin{Bmatrix} u_2 \\ u_3 \end{Bmatrix} = \begin{Bmatrix} F_2^{(2)} \\ F_3^{(2)} \end{Bmatrix} \;\;\;\;\;(20)\]

such that the global stiffness matrix is the same as that derived directly in Eqn.15: ![Global matrix](images/global_matrix.PNG) *(Note that, to create the global stiffness matrix by assembling the element stiffness matrices, k22 is given by the sum of the direct stiffnesses acting on node 2 – which is the compatibility criterion. Note also that the indirect cells kij are either zero (no load transfer between nodes i and j), or negative to indicate a reaction force.)*

For this simple case the benefits of assembling the element stiffness matrices (as opposed to deriving the global stiffness matrix directly) aren't immediately obvious. We consider therefore the following (more complex) system, which contains 5 springs (elements) and 5 degrees of freedom. (Problems of practical interest can have tens or hundreds of thousands of degrees of freedom, and more!) Since there are 5 degrees of freedom we know the matrix order is 5×5. We also know that it is symmetric, so it takes the form shown below: ![Spring system with 5 degrees of freedom](images/spring_5_5.jpg) We want to populate the cells to generate the global stiffness matrix.
From our observation of simpler systems, e.g. the two-spring system above, the following rules emerge:

* The term in location ii consists of the sum of the *direct stiffnesses* of all the elements meeting at node i
* The term in location ij consists of the sum of the indirect stiffnesses relating to nodes i and j of all the elements joining node i to j
* Add a negative sign for reaction terms (–kij)
* Add a zero for node combinations that don't interact

By following these rules, we can generate the global stiffness matrix: ![Global stiffness matrix](images/spring_matrix.jpg) This type of assembly process is handled automatically by commercial FEM codes. Drag the springs into position and click 'Build matrix', then apply a force to node 5. You will then see the force equilibrium equations, the equivalent spring stiffness and the displacement at node 5.

Solving for (u) - The unknowns (degrees of freedom) in the spring systems presented are the displacements ui. Our global system of equations takes the following form: ![Global system of equations](images/spring_solution.jpg) To find {u}, solve

\[\{u\} = [K]^{-1}\{F\} \;\;\;\;\;(22)\]

Recall that \([K][K]^{-1} = I\), the identity matrix: \(I = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\) in two dimensions. Recall also that, in order for a matrix to have an inverse, its determinant must be non-zero. If the determinant is zero, the matrix is said to be singular and no unique solution for Eqn.22 exists. For instance, consider once more the following spring system: ![Complex spring system](images/complex_spring_system_1.jpg) We know that the global stiffness matrix takes the following form

\[\begin{bmatrix} k^1 & -k^1 & 0 \\ -k^1 & k^1 + k^2 & -k^2 \\ 0 & -k^2 & k^2 \end{bmatrix}\begin{Bmatrix} u_1 \\ u_2 \\ u_3 \end{Bmatrix} = \begin{Bmatrix} F_1 \\ F_2 \\ F_3 \end{Bmatrix} \;\;\;\;\;(23)\]

The determinant of [K] can be found from:

\[\det\begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix} = (aei + bfg + cdh) - (ceg + bdi + afh) \;\;\;\;\;(24)\]

such that:

\[\det[K] = \left( k^1(k^1+k^2)k^2 + 0 + 0 \right) - \left( 0 + (-k^1)(-k^1)k^2 + k^1(-k^2)(-k^2) \right) \;\;\;\;\;(25)\]

\[\det[K] = \left( {k^1}^2 k^2 + k^1 {k^2}^2 \right) - \left( {k^1}^2 k^2 + k^1 {k^2}^2 \right) = 0 \;\;\;\;\;(26)\]

Since the determinant of [K] is zero, it is not invertible but singular: there are no unique solutions and {u} cannot be found. If this is the case in your own model, then you are likely to receive an error message! (A short numerical sketch of the assembly process and of this singularity is given below.)
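The assembly rules and the singularity just described can be demonstrated in a few lines. The sketch below (Python with NumPy; the stiffness values are illustrative) assembles the two-spring global matrix of equation 23 from its element matrices and confirms that, before any boundary conditions are applied, its determinant is zero.

```python
import numpy as np

def assemble(k_elements, connectivity, n_dof):
    """Assemble a global stiffness matrix from 1D spring elements.
    connectivity[e] = (i, j) gives the node numbers of element e."""
    K = np.zeros((n_dof, n_dof))
    for k, (i, j) in zip(k_elements, connectivity):
        Ke = np.array([[k, -k], [-k, k]])
        for a, p in enumerate((i, j)):
            for b, q in enumerate((i, j)):
                K[p, q] += Ke[a, b]   # direct and indirect stiffness terms
    return K

K = assemble([100.0, 200.0], [(0, 1), (1, 2)], 3)   # two springs, three nodes
print(K)
print("det[K] =", np.linalg.det(K))   # ~0: singular until BCs are applied
```

The k22 entry emerges as the sum of the two direct stiffnesses at node 2, exactly as the compatibility criterion requires.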
Enforcing Boundary Conditions =

By enforcing boundary conditions, such as those depicted in the system below, [K] becomes invertible (non-singular) and we can solve for the reaction force F1 and the unknown displacements u2 and u3, for known (applied) F2 and F3. ![](images/boundary_enforce.jpg)

\[[K] = \begin{bmatrix} k^1 + k^2 & -k^2 \\ -k^2 & k^2 \end{bmatrix} = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \;\;\;\;\;(27)\]
\[\det[K] = ad - cb \;\;\;\;\;(28)\]
\[\det[K] = (k^1 + k^2)k^2 - {k^2}^2 = k^1 k^2 \ne 0 \;\;\;\;\;(29)\]

Unique solutions for \(F_1\), \(u_2\) and \(u_3\) can now be found:

\[-k^1 u_2 = F_1\]
\[(k^1 + k^2)u_2 - k^2 u_3 = F_2 = k^1 u_2 + k^2 u_2 - k^2 u_3\]
\[-k^2 u_2 + k^2 u_3 = F_3\]

![Unique solutions](images/boundary_unique.jpg)

In this instance we solved three equations for three unknowns. In problems of practical interest the order of \([K]\) is often very large and we can have thousands of unknowns. It then becomes impractical to solve for \(\{u\}\) by direct inversion of the global stiffness matrix. We can instead use Gauss elimination, which is much more suitable for solving systems of linear equations with thousands of unknowns.

Gauss Elimination - We have a system of equations

\[x - 3y + z = 4 \;\;\;\;\;(30)\]
\[2x - 8y + 8z = -2 \;\;\;\;\;(31)\]
\[-6x + 3y - 15z = 9 \;\;\;\;\;(32)\]

which, expressed in augmented matrix form, is

\[\left[\begin{array}{ccc|c} 1 & -3 & 1 & 4 \\ 2 & -8 & 8 & -2 \\ -6 & 3 & -15 & 9 \end{array}\right] \;\;\;\;\;(33)\]

We wish to create a matrix of the following form,

\[\left[\begin{array}{ccc|c} 11 & 12 & 13 & 1 \\ 0 & 22 & 23 & 2 \\ 0 & 0 & 33 & 3 \end{array}\right] \;\;\;\;\;(34)\]

where the terms below the diagonal are zero (the entries shown simply label the positions). We eliminate the unknowns one at a time. To eliminate x from row 2 (where R denotes the row):

−2(R1) + R2       (35)
\[-2(x - 3y + z) + (2x - 8y + 8z) = -10 \;\;\;\;\;(36)\]
\[-2y + 6z = -10 \;\;\;\;\;(37)\]

so that

\[\left[\begin{array}{ccc|c} 1 & -3 & 1 & 4 \\ 0 & -2 & 6 & -10 \\ -6 & 3 & -15 & 9 \end{array}\right] \;\;\;\;\;(38)\]

To eliminate x from row 3:

6(R1) + R3       (39)
\[6(x - 3y + z) + (-6x + 3y - 15z) = 33 \;\;\;\;\;(40)\]
\[-15y - 9z = 33 \;\;\;\;\;(41)\]
\[\left[\begin{array}{ccc|c} 1 & -3 & 1 & 4 \\ 0 & -2 & 6 & -10 \\ 0 & -15 & -9 & 33 \end{array}\right] \;\;\;\;\;(42)\]

To simplify row 2, divide it by 2 (R2/2):

\[-y + 3z = -5 \;\;\;\;\;(43)\]
\[\left[\begin{array}{ccc|c} 1 & -3 & 1 & 4 \\ 0 & -1 & 3 & -5 \\ 0 & -15 & -9 & 33 \end{array}\right] \;\;\;\;\;(44)\]

To simplify row 3, divide it by 3 (R3/3):

\[-5y - 3z = 11 \;\;\;\;\;(45)\]
\[\left[\begin{array}{ccc|c} 1 & -3 & 1 & 4 \\ 0 & -1 & 3 & -5 \\ 0 & -5 & -3 & 11 \end{array}\right] \;\;\;\;\;(46)\]

And then, to eliminate y from row 3:

−5(R2) + R3       (47)
\[-5(-y + 3z) + (-5y - 3z) = 36 \;\;\;\;\;(48)\]
\[-18z = 36 \;\;\;\;\;(49)\]
\[\left[\begin{array}{ccc|c} 1 & -3 & 1 & 4 \\ 0 & -1 & 3 & -5 \\ 0 & 0 & -18 & 36 \end{array}\right] \;\;\;\;\;(50)\]
\[z = -2 \;\;\;\;\;(51)\]

Substituting z = −2 back into R2 gives y = −1, and substituting y = −1 and z = −2 back into R1 gives x = 3. This process of progressively solving for the unknowns is called ***back substitution***.
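The elimination and back-substitution steps above translate directly into code. Below is a minimal sketch (Python with NumPy, and without the row pivoting a production solver would use for numerical stability), applied to the system of equations 30-32; it reproduces x = 3, y = −1, z = −2.

```python
import numpy as np

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with back substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination: zero out the entries below the diagonal
    for i in range(n):
        for j in range(i + 1, n):
            m = A[j, i] / A[i, i]
            A[j, i:] -= m * A[i, i:]
            b[j] -= m * b[i]
    # Back substitution, from the last row upwards
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[1, -3, 1], [2, -8, 8], [-6, 3, -15]])
b = np.array([4, -2, 9])
print(gauss_solve(A, b))   # -> [ 3. -1. -2.]
```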
Interpolation/Basis/Shape Functions =

Consider the temperature distribution along the one-dimensional fin in Fig.1. ![](images/interpolation_image.jpg) A one-dimensional continuous temperature distribution with an infinite number of unknowns is shown in (a). The fin is discretised in (b) – i.e. divided into 4 subdomains (or elements). The nodes are numbered consecutively from left to right, as are the elements. The elements are *first order* elements; the interpolation scheme between the nodes is therefore linear. Note that there are only 5 nodes for this system, since the internal nodes are shared between the elements. Since we are only solving for temperature, there are only 5 degrees of freedom in this model of the continuous system. It should be clear that a better approximation for *T(x)* would be obtained if the number of elements was increased (i.e. if the element lengths were reduced). It is also apparent that the nodes should be placed closer together in regions where the temperature (or any other unknown solution) changes rapidly. It is useful also to place a node wherever a step change in temperature is expected and wherever a numerical value of the temperature is needed. It is good practice to continue to increase the number of nodes until a converged solution is reached.

In (c), the fin has been divided into two subdomains – elements 1 and 2. However, in this instance we have chosen to use a *second order* (quadratic) element. These elements contain 'midside' nodes as shown, and the interpolation between the nodes is quadratic, which permits a much closer approximation to the real system. For this model system there are still just 5 degrees of freedom. However, the analysis takes longer for (c) than it does for (b) because the quadratic interpolation (which calculates the temperature at locations between the nodes) is more demanding than the corresponding linear case. (There is often a trade-off between a large number of first order elements, each requiring little computation, and a smaller number of second order elements, each requiring heavier computation. This affects both the analysis time and the solution accuracy, and the choice depends to a large extent on the problem being solved.)

1D first order shape functions

We can use (for instance) the direct stiffness method to compute degrees of freedom at the element nodes. However, we are also interested in the value of the solution at positions inside the element. To calculate values at positions other than the nodes we interpolate between the nodes using shape functions. A one-dimensional element with length L is shown below. It has two nodes, one at each end, denoted i and j, and known nodal temperatures Ti and Tj. We can deduce automatically that the element is first order (linear) since it contains no 'midside' nodes. ![](images/shape_1D.jpg) One dimensional linear element with temperature degrees of freedom. We need to derive a function to compute values of the temperature at locations between the nodes. This interpolation function is called the shape function, and we demonstrate its derivation for a 1-dimensional linear element here. Note that, for linear elements, the polynomial interpolation function is first order. If the element was second order, the polynomial function would be second order (quadratic), and so on. Since the element is first order, the temperature varies linearly between the nodes and the equation for T is:

\[T(x) = a + bx \;\;\;\;\;(1)\]

We can therefore write:

\[T_i = a + bX_i \;\;\;\;\;(2)\]
\[T_j = a + bX_j \;\;\;\;\;(3)\]

which are simultaneous equations. To determine the coefficients \(a\) and \(b\):

\[\frac{T_i - a}{X_i} = b \;\;\;\;\;(4)\]
\[\frac{T_j - a}{X_j} = b \;\;\;\;\;(5)\]
\[\frac{T_i - a}{X_i} = \frac{T_j - a}{X_j} \;\;\;\;\;(6)\]
\[(T_i - a)X_j = (T_j - a)X_i \;\;\;\;\;(7)\]
\[T_i X_j - aX_j = T_j X_i - aX_i \;\;\;\;\;(8)\]
\[T_i X_j - T_j X_i = a(X_j - X_i) \;\;\;\;\;(9)\]
\[\frac{T_i X_j - T_j X_i}{X_j - X_i} = a \;\;\;\;\;(10)\]
\[\frac{T_i X_j - T_j X_i}{L} = a \;\;\;\;\;(11)\]

and

\[T_i - bX_i = a \;\;\;\;\;(12)\]
\[T_j - bX_j = a \;\;\;\;\;(13)\]
\[T_i - bX_i = T_j - bX_j \;\;\;\;\;(14)\]
\[bX_j - bX_i = T_j - T_i \;\;\;\;\;(15)\]
\[b(X_j - X_i) = T_j - T_i \;\;\;\;\;(16)\]
\[b = \frac{T_j - T_i}{X_j - X_i} \;\;\;\;\;(17)\]
\[b = \frac{T_j - T_i}{L} \;\;\;\;\;(18)\]

Substitution of Eqns.11 and 18 into Eqn.1 yields:

\[T(x) = \frac{T_i X_j - T_j X_i}{L} + \left( \frac{T_j - T_i}{L} \right)x \;\;\;\;\;(19)\]
\[T(x) = \frac{T_i X_j - T_j X_i}{L} + \frac{T_j x - T_i x}{L} \;\;\;\;\;(20)\]
\[T(x) = \frac{T_i X_j}{L} - \frac{T_j X_i}{L} + \frac{T_j x}{L} - \frac{T_i x}{L} \;\;\;\;\;(21)\]
\[T(x) = \frac{T_i X_j}{L} - \frac{T_i x}{L} + \frac{T_j x}{L} - \frac{T_j X_i}{L} \;\;\;\;\;(22)\]
\[T(x) = T_i\left( \frac{X_j - x}{L} \right) + T_j\left( \frac{x - X_i}{L} \right) \;\;\;\;\;(23)\]

It should be clear from Eqn.23 that the nodal temperature values are multiplied by linear functions of \(x\) – the shape functions. The functions are denoted by \(S\) with a subscript to indicate the node with which a specific shape function is associated. In the case presented:

\[S_i = \left( \frac{X_j - x}{L} \right) \;\;\;\;\;(24)\]
\[S_j = \left( \frac{x - X_i}{L} \right) \;\;\;\;\;(25)\]

and Eqn.23 becomes

\[T(x) = S_i T_i + S_j T_j \;\;\;\;\;(26)\]

In matrix form

\[T_x^e = \begin{bmatrix} S_i & S_j \end{bmatrix}\begin{Bmatrix} T_i \\ T_j \end{Bmatrix} \;\;\;\;\;(27)\]
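Equations 24-26 translate directly into code. A minimal sketch (Python; the nodal positions and temperatures in the demonstration lines are arbitrary illustrative values):

```python
def linear_shape_functions(x, Xi, Xj):
    """First order (linear) shape functions for a 1D element (Eqns 24-25)."""
    L = Xj - Xi
    Si = (Xj - x) / L
    Sj = (x - Xi) / L
    return Si, Sj

def interpolate(x, Xi, Xj, Ti, Tj):
    """Temperature inside the element, T(x) = Si*Ti + Sj*Tj (Eqn 26)."""
    Si, Sj = linear_shape_functions(x, Xi, Xj)
    return Si * Ti + Sj * Tj

# Each shape function is 1 at its own node, 0 at the other, and they sum to 1:
print(linear_shape_functions(1.0, Xi=1.0, Xj=3.0))   # -> (1.0, 0.0)
print(linear_shape_functions(3.0, Xi=1.0, Xj=3.0))   # -> (0.0, 1.0)
print(interpolate(1.5, 1.0, 3.0, Ti=20.0, Tj=30.0))  # -> 22.5
```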
For the case shown below, calculate T at x = 3.3.

\[T(x = 3.3) = T_i\left( \frac{X_j - x}{L} \right) + T_j\left( \frac{x - X_i}{L} \right)\]
\[T(x = 3.3) = 50\left( \frac{5 - 3.3}{2} \right) + 54\left( \frac{3.3 - 3}{2} \right) = 50.6\]

1-dimensional linear element with known nodal temperatures and positions.

From inspection of Eqn.26 we can deduce that each shape function has a value of 1 at its own node and a value of zero at the other node, and that the shape functions sum to one. The shape functions are also first order, just as the original polynomial was; they would have been quadratic if the original polynomial had been quadratic. A continuous, piecewise smooth equation for the one-dimensional fin first shown above can be constructed by connecting the linear element equations. We know that the temperature at any point in any element can be found from the nodal temperatures Ti and the shape functions Si. For the following system:

\[T_x^e = S_i T_i + S_j T_j \;\;\;\;\;\; X_i \le x \le X_j \;\;\;\;\;(28)\]

| **Element #** | ***i*** | ***j*** |
| - | - | - |
| 1 | 1 | 2 |
| 2 | 2 | 3 |
| 3 | 3 | 4 |
| 4 | 4 | 5 |

![](images/shape_4_element.jpg)

\[T_x^1 = S_1^1 T_1 + S_2^1 T_2 \;\;\;\;\;\; S_1^1 = \frac{X_2 - x}{X_2 - X_1} \;\;\;\;\;\; S_2^1 = \frac{x - X_1}{X_2 - X_1} \;\;\;\;\;\; X_1 \le x \le X_2 \;\;\;\;\;(29)\]
\[T_x^2 = S_2^2 T_2 + S_3^2 T_3 \;\;\;\;\;\; S_2^2 = \frac{X_3 - x}{X_3 - X_2} \;\;\;\;\;\; S_3^2 = \frac{x - X_2}{X_3 - X_2} \;\;\;\;\;\; X_2 \le x \le X_3 \;\;\;\;\;(30)\]
\[T_x^3 = S_3^3 T_3 + S_4^3 T_4 \;\;\;\;\;\; S_3^3 = \frac{X_4 - x}{X_4 - X_3} \;\;\;\;\;\; S_4^3 = \frac{x - X_3}{X_4 - X_3} \;\;\;\;\;\; X_3 \le x \le X_4 \;\;\;\;\;(31)\]
\[T_x^4 = S_4^4 T_4 + S_5^4 T_5 \;\;\;\;\;\; S_4^4 = \frac{X_5 - x}{X_5 - X_4} \;\;\;\;\;\; S_5^4 = \frac{x - X_4}{X_5 - X_4} \;\;\;\;\;\; X_4 \le x \le X_5 \;\;\;\;\;(32)\]

Note that \(S_2^1 \ne S_2^2\), \(S_3^2 \ne S_3^3\) and \(S_4^3 \ne S_4^4\): the shape functions associated with a shared node differ between the two elements that share it.

The temperature gradient through an individual element, \(\frac{dT^e}{dx}\), can be found from the derivative of Eqn.33:

\[T_x^e = T_i\left( \frac{X_j - x}{L} \right) + T_j\left( \frac{x - X_i}{L} \right) \;\;\;\;\;(33)\]
\[T_x^e = \frac{X_j T_i}{L} - \frac{x T_i}{L} + \frac{x T_j}{L} - \frac{X_i T_j}{L} \;\;\;\;\;(34)\]
\[T_x^e = \frac{x T_j}{L} - \frac{x T_i}{L} + \frac{X_j T_i}{L} - \frac{X_i T_j}{L} \;\;\;\;\;(35)\]
\[T_x^e = \frac{x}{L}(T_j - T_i) + \frac{1}{L}(X_j T_i - X_i T_j) \;\;\;\;\;(36)\]

such that

\[\frac{dT_x^e}{dx} = \frac{T_j - T_i}{L} \;\;\;\;\;(37)\]

We can check that our shape functions are correct by knowing that the shape function derivatives sum to zero.

1D second order shape functions =

A one-dimensional quadratic element is shown in Fig.4. We can deduce immediately that the element order is greater than one because the interpolation between the nodes is non-linear. We can determine from inspection that the element is quadratic (second order) because there's a 'midside' node.
We know therefore that the function approximating the solution is a second order polynomial: \[T\_x^e = a + bx + c{x^2} \;\;\;\;\; \rm{(38)} \] ![](images/quadratic_1D_element.jpg) The shape functions \({S\_i}\) can be determined by solving Eqn.38 using known \({T\_i}\) at known \({X\_i}\) to give: \[T\_x^e = {S\_i}{T\_i} + {S\_j}{T\_j} + {S\_k}{T\_k} \;\;\;\;\; \rm{(39)} \] \[{S\_i} = \frac{2}{{{L^2}}}\left( {x - {X\_k}} \right)\left( {x - {X\_j}} \right) \;\;\;\;\; \rm{(40)} \] \[{S\_j} = \frac{{ - 4}}{{{L^2}}}\left( {x - {X\_i}} \right)\left( {x - {X\_k}} \right) \;\;\;\;\; \rm{(41)} \] \[{S\_k} = \frac{2}{{{L^2}}}\left( {x - {X\_i}} \right)\left( {x - {X\_j}} \right) \;\;\;\;\; \rm{(42)} \] Using the quadratic shape functions for a single element (Eqns.40-42), we can assemble a corresponding set of equations for a larger system:

*Figure: quadratic approximation of T using two quadratic elements.*

| **Element #** | **Node *i*** | **Node *j*** | **Node *k*** |
| - | - | - | - |
| 1 | *i* | *j* | *k* |
| 2 | *k* | *m* | *n* |

\[T\_x^1 = \left[ {\begin{array}{\*{20}{c}}{S\_i^1}&{S\_k^1}&{S\_j^1}\end{array}} \right]\left\{ {\begin{array}{\*{20}{c}}{{T\_i}}\\{{T\_k}}\\{{T\_j}}\end{array}} \right\}\] \[T\_x^2 = \left[ {\begin{array}{\*{20}{c}}{S\_k^2}&{S\_m^2}&{S\_n^2}\end{array}} \right]\left\{ {\begin{array}{\*{20}{c}}{{T\_k}}\\{{T\_m}}\\{{T\_n}}\end{array}} \right\}\] \[T\_x^1 = S\_i^1{T\_i} + S\_j^1{T\_j} + S\_k^1{T\_k} \;\;\;\;\; \rm{(43)} \] \[S\_i^1 = \frac{2}{{{L^2}}}\left( {x - {X\_k}} \right)\left( {x - {X\_j}} \right)\] \[S\_j^1 = \frac{{ - 4}}{{{L^2}}}\left( {x - {X\_i}} \right)\left( {x - {X\_k}} \right)\] \[S\_k^1 = \frac{2}{{{L^2}}}\left( {x - {X\_i}} \right)\left( {x - {X\_j}} \right)\] \[T\_x^2 = S\_k^2{T\_k} + S\_m^2{T\_m} + S\_n^2{T\_n} \;\;\;\;\; \rm{(44)} \] \[S\_k^2 = \frac{2}{{{L^2}}}\left( {x - {X\_n}} \right)\left( {x - {X\_m}} \right)\] \[S\_m^2 = \frac{{ - 4}}{{{L^2}}}\left( {x - {X\_k}} \right)\left( {x - {X\_n}} \right)\] \[S\_n^2 = \frac{2}{{{L^2}}}\left( {x - {X\_k}} \right)\left( {x - {X\_m}} \right)\] The element shape functions are stored within the element in commercial FE codes. The positions Xi are generated (and stored) when the mesh is created. Once the nodal degrees of freedom are known, the solution at any point between the nodes can be calculated using the (stored) element shape functions and the (known) nodal positions. The order of the element and the number of elements in the geometric domain can have a strong effect on the accuracy of the solution. The following application demonstrates how the number of elements (mesh density) affects the accuracy of finite element model predictions. It compares the exact solution of the equation shown (which has an analytical solution) to predictions using the finite element method. In the application you can discretise (mesh) the "domain" using any number of elements between 1 and 20. The shape functions used are first order. The error between the exact solution and the FEM-predicted solution can be found by dragging the green cursor left and right through the domain.
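In the same spirit as the application, the convergence of a first order discretisation can be sketched in a few lines of Python (for illustration, the function and domain of question 6 in the Questions section below are assumed; they are not necessarily those used in the application itself):

```python
import numpy as np

def fe_linear_interpolate(x, nodes, nodal_values):
    """Evaluate a piecewise-linear FE approximation at x (Eqn.28, element by element)."""
    e = np.searchsorted(nodes, x) - 1        # index of the element containing x
    e = int(np.clip(e, 0, len(nodes) - 2))
    X_i, X_j = nodes[e], nodes[e + 1]
    S_i = (X_j - x) / (X_j - X_i)            # first order shape functions
    S_j = (x - X_i) / (X_j - X_i)
    return S_i * nodal_values[e] + S_j * nodal_values[e + 1]

exact = lambda x: x * (x - 3.5) * (x + 3) + 30   # function from question 6 below

for n_elements in (1, 2, 5, 20):
    nodes = np.linspace(0.0, 6.0, n_elements + 1)    # equal-length mesh of the domain
    fe = fe_linear_interpolate(3.2, nodes, exact(nodes))
    print(f"{n_elements:2d} elements: FE = {fe:7.3f}, error = {abs(fe - exact(3.2)):.3f}")
```

As the mesh is refined, the piecewise-linear interpolant converges towards the exact value, mirroring what the interactive application shows.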
Typical steps during FEM modelling

Consider a wall mounted bracket loaded uniformly along its length as in the figure below: ![](images/modelling_wall_mounted_bracket.jpg) Wall mounted bracket The geometry (field) is defined for us and is (relatively) complex. The boundary conditions are also defined and are: * A uniform force per unit length (Neumann condition) along the upper edge * Fixed x and y displacements along the clamped edge (Dirichlet condition) It is apparent that the bracket will respond mechanically under the action of the applied load and a system of internal stresses will develop (to balance the applied load). To calculate the stresses that develop we must first mesh (discretise) the domain, assemble the global stiffness matrix [K], and then determine the nodal displacements {u} and resultant forces {F} using a numerical technique such as Gaussian elimination. It is then a relatively trivial exercise to compute the nodal stresses from the nodal displacements and to find the solution between the nodes and within the elements using shape functions. ![](images/modelling_displacement.jpg)\(\underrightarrow {{\sigma \_{ij}} = {C\_{ijkl}}{\varepsilon \_{kl}}}\)![](images/modelling_stress_field.jpg) However, it is important to be aware that certain combinations of the specified number of elements (mesh density) and the specified element order can give rise to solutions that are highly inaccurate. It is highly advisable (and good practice) to perform a mesh sensitivity study, whereby the effect of successively finer meshes on the solution is analysed in order to eliminate any mesh sensitivity. The following application demonstrates this point. It concerns the deflection of a CANTILEVER beam loaded at its end with an applied force. The width of the beam can be altered, although the beam length and beam depth are held constant. The beam can be made of either steel or aluminium, and it can be loaded with a force of either 40 N or 80 N. The beam can then be meshed with either 2 or 4 elements through its thickness. The elements can be either first or second order. For any combination chosen, a prediction of the beam deflection based on finite element calculations is compared to predictions from (analytical) ordinary beam bending equations. It should become apparent that the error between the two decreases as the mesh density and element order are increased.

Summary =

* You should now understand the basics of the finite element method
* You should be able to build a global stiffness matrix for a combination of 1-dimensional springs
* You should understand the importance of boundary conditions (including types of boundary conditions)
* You should be familiar with first and second order shape functions and you should be able to use these functions to calculate the value of a solution between nodes.
* You should understand and appreciate that the finite element method provides approximate solutions only, and that the accuracy of your solution will depend on element type and mesh density.

Questions =

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. Which is true?
For second order elements:

   a. The interpolation between the element nodes is linear
   b. The shape function derivatives sum to 1
   c. The solution between the nodes can be approximated using a second order polynomial
   d. The solution between the nodes has no dependence on the solution at the nodes

2. Which is true? For first order elements:

   a. The shape functions sum to 0
   b. The shape function derivatives sum to 1
   c. Field variables vary linearly between the element nodes
   d. The elements contain midside nodes

3. Which is true? The error between an exact and finite element solution will always be reduced by:

   a. Decreasing the element density
   b. Increasing the element density
   c. Increasing the element order
   d. Decreasing the element order

4. If your finite element mesh contains 16 nodes, how many degrees of freedom are there in a coupled thermomechanical simulation?

   a. 64
   b. 48
   c. 24
   d. 16

5. In a structural mechanics analysis, what type of boundary condition is an applied pressure?

   a. Dirichlet
   b. Neumann
   c. Robin
   d. Dirichlet + Neumann

6. Discretise the following function using three equal length elements between \(0 \le x \le 6\). Assume the elements are linear (first order), and calculate \(\phi (x = 3.2)\) using the finite element method. Compare your answer to the exact solution. \[\phi = x(x - 3.5)(x + 3) + 30\]

7. Discretise the same function using six equal length elements and find \(\phi (x = 3.2)\) using the finite element method. Compare your answer to the exact solution and to the answer obtained using a three element discretisation.

8. Discretise the same function using three equal length but QUADRATIC elements. Calculate \(\phi (x = 3.2)\) and compare your answer to the ones obtained previously.

9. Using an equal length, 4-element discretisation of \(f(x) = 10 - {x^2}\), calculate \(f(x = 0.6)\) and the error between the finite element and exact solutions.

10. For the system of springs below, determine the global stiffness matrix. ![](images/q11a.png)

11. A system of 1-dimensional springs has the following global stiffness matrix. Draw the system of springs. ![](images/q12a.png)
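For questions 10 and 11, it may help to recall how a global stiffness matrix is assembled from individual springs. The sketch below uses hypothetical node numbering and stiffness values, since those in the figures are not reproduced here:

```python
import numpy as np

def assemble_global_stiffness(n_nodes, springs):
    """Assemble [K] for 1-D springs; springs is a list of (node_a, node_b, stiffness)."""
    K = np.zeros((n_nodes, n_nodes))
    for a, b, k in springs:
        K[a, a] += k   # each spring adds k to the two diagonal terms it touches...
        K[b, b] += k
        K[a, b] -= k   # ...and -k to the two off-diagonal terms it couples
        K[b, a] -= k
    return K

# Hypothetical example: three springs in series joining four nodes.
print(assemble_global_stiffness(4, [(0, 1, 100.0), (1, 2, 200.0), (2, 3, 300.0)]))
```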
Aims

After completing this TLP, you should: * Be aware of the atomic requirements for ferroelectricity. * Understand how this atomic structure leads to spontaneous polarisation, and how this polarisation can be switched. * Understand how this polarisation leads to the interesting electrical properties displayed by ferroelectrics. * Understand how these properties are made use of in today's technologies.

Before you start

This TLP should be fairly self-contained, but some knowledge of crystal structures is assumed. It may be helpful to read the related teaching and learning packages first.

Introduction

The ferroelectric effect was first observed by Valasek in 1921, in Rochelle salt, which has the molecular formula KNaC4H4O6·4H2O. The effect was then neglected for some time, and it wasn't until a few decades later that ferroelectrics came into widespread use. Nowadays, ferroelectric materials are used widely, mainly in memory applications. This TLP will show how the ferroelectric effect arises, and how it is put to use.

The dipole moment =

To be ferroelectric, a material must possess a spontaneous dipole moment that can be switched in an applied electric field, i.e. spontaneous switchable polarisation. Such a moment is found when two particles of equal and opposite charge *q* are separated by some distance ***r***, i.e.: ![Diagram of two charged particles q separated by some distance r](images/img001.gif) The dipole moment, ***μ***, is: ***μ*** = *q*.***r*** In a ferroelectric material, there is a net permanent dipole moment, which comes from the vector sum of the dipole moments in each unit cell, Σ***μ***. This means that it cannot exist in a structure that has a centre of symmetry, as any dipole moment generated in one direction would be forced by symmetry to be zero. Therefore, ferroelectrics must be non-centrosymmetric. This is not the only requirement, however. There must also be a spontaneous local dipole moment (which typically leads to a macroscopic polarisation, but not necessarily if there are domains that cancel completely). This means that the central atom must be in a non-equilibrium position. For example, consider an atom in a tetrahedral interstice. ![Diagrams of non-polar and polar structures](images/img002.gif) In (A) the structure is said to be non-polar. There is no displacement of the central atom, and no net dipole moment. In (B), however, the central atom is displaced and the structure is polar. There is now an inherent dipole moment in the structure. This results in a polarisation.

Polarisation

Polarisation may be defined as the total dipole moment per unit volume, i.e. \[P = \frac{{\sum \mu }}{V}\] Materials are polarised along a unique crystallographic direction, in that certain atoms are displaced along this axis, leading to a dipole moment along it. Depending on the crystal system, there may be few or many possible axes. As it is the most common and easiest to see, let us examine a tetragonal system that forms when cooled from the high temperature cubic phase through the Curie temperature, e.g. Tc = 120°C in BaTiO3. In this system, the dipole moment can lie in 6 possible directions corresponding to the original cubic axes: ![Diagram of possible directions for dipole moment](images/img003.gif) In a crystal, it is likely that the dipole moments of the unit cells in one region lie along a different one of the six directions to the dipole moments in another region.
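As a quick illustration of the definition P = Σμ/V, the polarisation can be estimated from an assumed per-cell dipole moment (a minimal sketch; all numbers below are illustrative assumptions, not measured values for any particular material):

```python
# Order-of-magnitude estimate of P = (sum of mu) / V with one dipole per unit cell.
e = 1.602e-19            # elementary charge (C)
q = 2 * e                # assumed effective charge on the displaced ion
r = 1.0e-11              # assumed displacement along the polar axis (m)
mu = q * r               # dipole moment per unit cell (C m)

a = 4.0e-10              # assumed cubic lattice parameter (m)
V = a ** 3               # unit cell volume (m^3)

P = mu / V
print(f"P = {P:.2e} C m-2")   # ~5e-2 C m-2, i.e. ~5 uC cm-2
```

Even this crude estimate lands in the range of a few μC cm-2, the right order of magnitude for a real ferroelectric.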
Each such region of common dipole orientation is called a domain, and a cross section through a crystal can look like this: ![Diagram of domains in a cross-section of a crystal](images/img004.jpg) A domain is a homogeneous region of a ferroelectric, in which all of the dipole moments in adjacent unit cells have the same orientation. In a newly-grown single crystal, there will be many domains, with individual polarisations such that there is no overall polarisation. They often appear like this: ![Diagram of domains in a cross-section of a single crystal](images/img005.jpg) The polarisation of individual domains is organised such that +ve heads are held near -ve tails. This leads to a reduction in stray field energy, because there are fewer isolated heads and tails of domains. This is analogous to the strain energy reduction found in dislocation stacking. Domain boundaries are arranged so that the dipole moments of individual domains meet at either 90° or 180°. In a polycrystal (one with more than one crystallographic grain), the arrangement of domains depends on grain size. If the grains are fine (<< 1 micron), then there is usually found to be one domain per grain. In larger grains there can be more than one domain in each grain. This is a micrograph showing the domains in a single grain. ![Micrograph showing domains within a single grain](images/img_micrograph199.gif) In this grain, the domains are twinned in such a way as to reduce the overall stray electric field energy. As each domain possesses its own dipole moment, we may switch dipole moments in order to encode information.

Switching polarisation (1)

In an electric field, *E*, a polarised material lowers its (volume-normalised) free energy by –*P.E* (where *P* is the polarisation). Any dipole moments which lie parallel to the electric field are lowered in free energy, while moments that lie perpendicular to the field are higher in free energy, and moments that lie anti-parallel are higher still (+*P.E*). This introduces a driving force to minimise the free energy, such that all dipole moments align with the electric field. Let us start by considering how dipole moments may align in zero applied field: ![Diagram of stable dipole alignments](images/img006ab.jpg) These two moments are stable, because they sit in potential energy wells. The potential barrier between them can be represented on a free energy diagram: ![Free energy diagram](images/img006c.jpg) This material is considered to be homogeneous. If the polarisation points left then we have: ![Free energy diagram](images/img006d.jpg) The electric field alters the free energy profile, resulting in a 'tilting' of the potential well: ![Free energy diagram](images/img006e.jpg) An increase in the electric field will result in a greater tilt, and lead to the dipole moments switching, leading to: ![Free energy diagram](images/img006f.jpg) Next we must look at the more realistic scenario in which domains form.

Switching polarisation (2)

Consider a material which is fully polarised, so that all of the dipole moments are aligned in the same direction. Then apply a reversed electric field over it. New domains with a reversed polarisation nucleate at the electrodes. They then grow towards the other electrode, forming needle domains. When they reach the other electrode, they grow laterally until the polarisation of the entire sample is reversed. This is the origin of the hysteresis loop.
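The 'tilted well' picture above can be made concrete with a toy Landau-type free energy, F(P) = −(a/2)P² + (b/4)P⁴ − EP. This is a hedged sketch only; the coefficients below are arbitrary illustrative numbers, not fitted to any material:

```python
import numpy as np

# Toy Landau free energy: F(P) = -(a/2) P^2 + (b/4) P^4 - E*P
a, b = 1.0, 1.0                       # arbitrary illustrative coefficients
P = np.linspace(-1.5, 1.5, 3001)

for E in (0.0, 0.2, 0.5):             # increasing applied field 'tilts' the double well
    F = -0.5 * a * P**2 + 0.25 * b * P**4 - E * P
    print(f"E = {E:.1f}: stable P = {P[np.argmin(F)]:+.2f}")
```

With E = 0 the two wells are degenerate, giving two possible remanent states; increasing E tilts the landscape until only the field-aligned minimum survives, which is the single-domain view of switching.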
The removal of the field will leave some polarisation behind, and only when the field is reversed does the polarisation start to lessen as new, oppositely poled domains form. They grow quickly, however, giving a large change of polarisation for very little electric field. But to form an entirely reversed material, a large switching field is required. This is partly because of defects in the crystal structure, which pin domain walls in a manner similar to Zener drag, and partly because of stray field energy: the polarisation of the material goes from a coupled pattern, with 180° boundaries, to a state in which many heads and tails are separated. This leads to an increase in stray field energy. Therefore, to attain this state, more energy has to be put in by a larger field. Here we show how a minor hysteresis loop fits into the major loop above. ![Diagram showing hysteresis curve](images/img009.gif) The part of the curve shown fits into the major hysteresis curve. There are three sections to this curve:

1) Reversible domain wall motion.
2) Linear growth of new domains.
3) New domains reaching the limit of their growth.

Measurement of polarisation =

Polarisation may be defined as: \[P = \frac{Q}{A}\] where *Q* is the charge developed on the plates (Coulombs) and *A* is the area of the plates (m2). A good ferroelectric has 10 μC cm-2 < *P* < 100 μC cm-2. We can measure the polarisation by using the classic Sawyer-Tower circuit: ![Diagram of Sawyer-Tower circuit](images/img010.jpg) In this experiment, the voltage is cycled by the signal generator. Its direction is reversed at high frequency, and the voltage across the reference capacitor is measured. The charge on the reference capacitor must be the same as the charge on the ferroelectric capacitor, as they are in series. This means the charge on the ferroelectric can be found from: Q = C × V where *C* is the capacitance of the reference capacitor, and *V* is the voltage measured across it. We can therefore represent the polarisation of a material in an oscillating electric field by plotting the voltage applied to the material on the x-axis of the oscilloscope, and the surface charge on the y-axis. This can be done because the capacitance of the reference capacitor is much higher than the capacitance of the ferroelectric, so most of the voltage lies over the ferroelectric. It is only possible to measure *P* by cycling the polarisation through cycling the voltage across the ferroelectric. We cannot measure absolute values instantaneously, but we can deduce absolute values from the changes measured when cycling the polarisation.

Fabrication of a KNO3 ferroelectric capacitor =

A capacitor can be made from potassium nitrate (KNO3), which is ferroelectric below 120°C. (The temperature dependence of ferroelectrics will be explained later.) The following video clip shows the construction of a KNO3 capacitor, and the hysteresis loop it displays. The circuit used is the standard Sawyer-Tower circuit. Demonstration of the construction of a KNO3 capacitor. The result is a hysteresis loop. This arises from the fact that a system does not respond immediately to a given set of external conditions. Rather, there is a history dependence, and this is the basis for memory (two states are possible at *E* = 0). The final hysteresis loop appears like this: ![Hysteresis loop](images/img011.gif) When the field is removed, the polarisation does not disappear as it would in a simple dielectric (a → c).
The polarisation which remains after a material has been fully polarised and then had the field removed is called the remanent polarisation (*P*r). Only after a field is applied in the opposite direction to the original polarising field does the polarisation diminish significantly. There is a specific field which results in zero net polarisation (d). This is called the coercive field (*E*C). Finally, if a sufficiently strong electric field is applied in the reverse direction, the polarisation will reach its maximum value in the opposite direction (e). To understand how the polarisation switches we must consider domains more fully.

Temperature dependence of the hysteresis loop =

We have now seen the way in which the hysteresis loop arises. However, there are more aspects to the hysteresis loop. We have only observed it at one particular temperature, one at which the material is ferroelectric. What happens if the temperature is raised? The hysteresis loop changes with temperature, becoming sharper and thinner, and eventually disappearing, i.e.: ![Hysteresis loops at different temperatures](images/img012.gif) As you can see, the polarisation increases at 90°C, as a result of a phase transition. Between this temperature and room temperature, the polarisation increases steadily, in direct relation with temperature, such that: ΔP = p ΔT where *p* is the pyroelectric coefficient (C m-2 K-1). **Why should this be?** This is a general behaviour (that does not apply to KNO3) that can arise for two reasons, depending on the material. 1. Disorder. Each unit cell has its own dipole moment; when these moments are aligned to give a net polarisation, the material is described as ordered. At high T, the direction of the dipole moments randomises, giving a disordered material with no net polarisation. 2. Phase transitions that can open up new possibilities for dipole moments to form. In this case, there is a jump at 0°C, and at 90°C, where the loop becomes taller.

Barium titanate =

Let us consider one of the most well-known ferroelectrics, barium titanate (BaTiO3). It has this perovskite structure: ![Diagram of perovskite structure](images/img013a.gif) ![Diagram of perovskite structure](images/img013b.gif)

Barium titanate and phase changes =

The temperature at which the spontaneous polarisation disappears is called the Curie temperature, *T*C; for barium titanate *T*C is 120°C. Above 120°C, barium titanate has a cubic structure. It is therefore centrosymmetric and possesses no spontaneous dipole. With no spontaneous dipole the material behaves like a simple dielectric, such that its polarisation varies linearly with field. Below 120°C, it changes to a tetragonal phase, with an accompanying movement of the atoms. The movement of Ti atoms inside the O6 octahedra may be considered to be significantly responsible for the dipole moment: ![Diagram to show change in ion structure during phase change](images/img014.gif) Cooling through 120°C causes the cubic phase of barium titanate to transform to a tetragonal phase with a lengthening of the c lattice parameter (and a corresponding reduction in a and b). The dipole moment may be considered to arise primarily from the movement of Ti atoms with respect to the O atoms in the same plane, but the movement of the other O atoms (i.e. those O atoms above and below Ti atoms) and the Ba atoms is also relevant. ![Diagram to show change in ion structure during phase change](images/img015a.gif) The diagram below shows the BaTiO3 structure with an O6 octahedron surrounding the Ti atom.
![Diagram to show change in ion structure during phase change](images/img015b.gif) The change back to a cubic structure is the reason why the spontaneous polarisation disappears above 120°C. Barium titanate has two other phase transitions on cooling further, each of which enhances the dipole moment. The phase which is reached after cooling to ~ 0°C from tetragonal is orthorhombic: ![Diagram to show change in ion structure during phase change](images/img016a.gif) and the structure then becomes rhombohedral below -90°C: ![Diagram to show change in ion structure during phase change](images/img017a.gif) ![Diagram to show change in ion structure during phase change](images/img017b.gif) All of these ferroelectric phases have a spontaneous polarisation based to a significant extent on movement of the Ti atom in the O6 octahedra in the following way (using pseudo-cubic notation): ![Diagram to show change in ion structure during phase change](images/img018a.gif)

Order of phase transitions

Two common types of phase transition may be identified, named according to how the order parameter changes during the transition. In ferroelectrics the order parameter is the polarisation. A first order transition is one which has a discontinuity in the order parameter itself, while a second order transition is one which has a discontinuity in the first derivative of the order parameter. Plots of the spontaneous polarisation vs. temperature appear as follows.

| First order | Second order |
| - | - |
| Plot of the spontaneous polarisation vs. temperature | Plot of the spontaneous polarisation vs. temperature |

In a first order transition the polarisation varies continuously until the Curie temperature, at which there is a discontinuity. In a second order transition, the order parameter itself is a continuous function of temperature, but there is a discontinuity in its first derivative at *T*C.

Ferroelectrics - why? =

Ferroelectric materials are used for binary information storage in FeRAM (Ferroelectric Random Access Memory). The zeroes and ones in each ferroelectric capacitor correspond to a polarisation that is up or down. The polarisation state is set up or down by applying a positive or negative voltage, and the polarisation stays up or down after removing this voltage. FeRAM therefore offers non-volatile data storage. However, to read FeRAM data, the polarisation must be electrically cycled, which takes time and erases the data (a destructive read). FeRAM was used in the Sony PlayStation 2, and it has also been used in smart cards for Japanese railways.

Summary =

Ferroelectrics have now been in use for decades, but they are still an expanding field. Their use in computing will only increase as they are miniaturised. However, to do this, the way in which their properties vary on the microscale has to be understood, so this will be a target for future research. Ferroelectrics will be used for a long time to come, as their properties are unique.

Questions =

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. What symmetry element must be absent for a material to be ferroelectric?

   a. An axis of rotation.
   b. A mirror plane.
   c. A centre of symmetry.
   d. An improper axis of rotation.

2. Which of these is not a correct definition of polarisation?

   a. The net dipole moment per unit volume.
   b. The surface charge per unit area.
   c. The movement of atoms giving rise to a dipole moment.
   d. The net charge per dipole moment.

3. Why are domains in crystals found in a manner such that their polarisation is coupled?

   a. To reduce stray field energy.
   b. To reduce dislocation strain energy.
   c. To grow the crystal in a regular manner.
   d. To give small grains.

4. What is needed to make ferroelectric domains useful as a binary memory store?

   a. Large domains.
   b. A square hysteresis loop.
   c. A large coercive field.
   d. Small domains.

5. If a ferroelectric, 50 mm by 10 mm, has a measured surface charge of 2.5 x 10-4 Coulombs, and a lattice parameter of 5 x 10-10 m, what is the dipole moment in a single cubic unit cell?

   a. 6.25 x 10-29 C.m
   b. 6.25 x 10-35 C.m
   c. 1.5625 x 10-35 C.m
   d. 1.5625 x 10-29 C.m

6. What field has to be applied to give a ferroelectric, with a square hysteresis loop, zero net polarisation?

   a. The switching field.
   b. The polarising field.
   c. The coercive field.
   d. The stray field.

Going further =

### Books

* *Ferroelectrics: An Introduction to the Physical Principles* by J.C. Burfoot, D. Van Nostrand Company Ltd., 1967

### Websites

* You may wish to look at the related TLP.
Aims

On completion of this TLP you should: * Understand ferromagnetism as a type of magnetism and some of the reasons an element is ferromagnetic * Be aware that magnetism is affected by temperature * Understand the factors contributing to the formation of magnetic domains * Know why hysteresis occurs, and the factors which affect it

Before you start

Most of this package does not assume any previous understanding of ferromagnetism, although a brief understanding of atomic and electronic structure in the elements may aid some explanations. However, it is not necessary!

Introduction

Magnets are used by millions of people worldwide every day, but what they may not realise every time they open their fridge, listen to their mp3 player or turn on their computer is that they are using a quantum mechanical phenomenon! The understanding of magnetism is extremely important in the continuing development of modern technology. For example, electronic device memory is an area of intensive research, as there is continuous demand for more memory in smaller devices.

Types of magnetism

All magnetic materials contain *magnetic moments*, which behave in a way similar to microscopic bar magnets. In order to define ferromagnetism as a class of magnetism, it is easiest to compare the various properties of the different possible types of magnetic material. These are principally: paramagnets, ferromagnets, antiferromagnets and ferrimagnets. **Paramagnetism** In a paramagnet, the magnetic moments tend to be randomly orientated due to thermal fluctuations when there is no magnetic field. In an applied magnetic field these moments start to align parallel to the field, such that the magnetisation of the material is proportional to the applied field. ![Schematic showing the magnetic dipole moments randomly aligned in a paramagnetic sample. ](images/FigureA.gif) Figure A. Schematic showing the magnetic dipole moments randomly aligned in a paramagnetic sample. **Ferromagnetism** The magnetic moments in a ferromagnet have the tendency to become aligned parallel to each other under the influence of a magnetic field. However, unlike the moments in a paramagnet, these moments will then remain parallel when a magnetic field is not applied (this will be discussed later). ![Schematic showing the magnetic dipole moments aligned parallel in a ferromagnetic material](images/FigureB.gif) Figure B. Schematic showing the magnetic dipole moments aligned parallel in a ferromagnetic material. **Antiferromagnetism** Adjacent magnetic moments from the magnetic ions tend to align anti-parallel to each other without an applied field. In the simplest case, adjacent magnetic moments are equal in magnitude and opposite in direction, so there is no overall magnetisation. ![Schematic showing adjacent magnetic dipole moments with equal magnitude aligned anti-parallel in an antiferromagnetic material.](images/FigureC.gif) Figure C. Schematic showing adjacent magnetic dipole moments with equal magnitude aligned anti-parallel in an antiferromagnetic material. This is only one of many possible antiferromagnetic arrangements of magnetic moments. **Ferrimagnetism** The aligned magnetic moments are not of the same size; that is to say, there is more than one type of magnetic ion. An overall magnetisation is produced, but not all the magnetic moments give a positive contribution to it. ![Schematic showing adjacent magnetic moments of different magnitudes aligned anti-parallel](images/FigureD.gif) Figure D.
Schematic showing adjacent magnetic moments of different magnitudes aligned anti-parallel. Below is a periodic table showing the elements and the types of magnetism at room temperature: ![Diagram of a periodic table showing elements coloured according to the type of magnetism they show at room temperature](images/FigureE.gif) Figure E. Diagram of a periodic table showing elements coloured according to the type of magnetism they show at room temperature.

Why are some materials magnetic?

It was discovered by Oersted in 1820 that a magnetic compass needle is deflected when placed near an electric current; this was a breakthrough in the understanding of magnetism that later led to Ampère's observation that the magnetic field of a solenoid is identical to that of a magnet. ![Schematic showing the shape of the magnetic field around a bar magnet and a solenoid are identical](images/FigureF.gif) Figure F. Schematic showing that the shapes of the magnetic fields around a bar magnet and a solenoid are identical. Ampère then hypothesised that all magnetic effects are due to current loops, and that the magnetic effects in materials must be due to "molecular currents", attributed to the movement of electrons. However, the currents predicted by this model were unfeasibly large, and a second origin of magnetism was required. This second magnetic mechanism is spin, postulated by Dirac in 1928, after he solved the fully relativistic quantum mechanical equations governing the electron. ![Diagram to show the magnetic moment produced by an electron orbiting the nucleus and that produced by the spin of the electron](images/FigureG.gif) Figure G. Diagram to show the magnetic moment produced by an electron orbiting the nucleus and that produced by the spin of the electron. The spin of an electron is hard to visualise, but it has the properties of a small magnetic moment pointing either "up" or "down". Within an atom, electrons are arranged in *orbitals*, with a maximum of two electrons with opposite spin occupying each orbital (due to the Pauli Exclusion Principle). The orbitals are further grouped into *shells*. In all atoms except hydrogen there is more than one electron, and these electrons can interact with each other as well as with the nucleus, leading to "coupling".

Coupling

As outlined above, the total magnetic moment of a free atom has two contributions from each electron: 1. The angular momentum as the electron orbits the nucleus (strictly, the momentum of the nucleus relative to the orbiting electron). This is effectively Ampère's molecular current and is known as the orbital contribution. 2. The 'spin' of the electron itself. In an atom with a single electron, there are just two magnetic fields produced which can interact (see the figure above). The magnetic field from the electron's spin interacts with the magnetic field from its movement around the nucleus, leading to so-called *spin-orbit* coupling. In an atom with more than one electron, the total magnetic moment of the atom will depend on the spin-orbit (intra-electron), spin-spin (inter-electron) and orbit-orbit (also inter-electron) coupling. The spin-orbit coupling is weak for light atoms and can generally be ignored in calculating the total angular momentum. The total magnetic moment can be determined by simple vector addition of the fields. Important results of this coupling are as follows. When working out the moment on an atom, only *incomplete electron shells* (groups of orbitals) *contribute*.
Furthermore, electrons arrange themselves in shells in such a way as to maximise total spin. Putting these two results together, we see that once a shell is more than half full, spins begin to pair up within orbitals and the available magnetic moment decreases. This underlies the existence of magnetic order in elements in the middle of the 3d series and the middle of the 4f series.

Why are only some materials ferromagnetic?

For ferromagnetism to occur there must be an internal driving force that causes parallel alignment of the spins of the electrons to be the more favourable state. Weiss explained this observed behaviour by proposing the presence of an internal interaction between the localised moments, called a *molecular field*. Whilst Weiss's phenomenological model represented a significant step forward, the microscopic explanation of his molecular field required the laws of quantum mechanics. The important term in the interaction of the localised moments is called the *exchange interaction*. The exchange interaction arises as a consequence of the *Pauli Exclusion Principle*: if two electrons have different spins then they can occupy the same orbital (angular momentum state), and hence be closer to each other, so they will have a stronger Coulomb repulsion. ![Schematic showing two electrons occupying orbital 1 producing a strong Coulomb repulsion](images/FigureH.gif) Figure H. Schematic showing two electrons occupying orbital 1, producing a strong Coulomb repulsion compared to Figure I. If the electrons have the same spin then they will occupy different orbitals and therefore have less Coulomb repulsion, as they will be further apart. In this way, the exchange energy (the energy due to the repulsion between the two electrons) is minimised. ![Schematic showing two electrons occupying different orbitals.](images/FigureI.gif) Figure I. Schematic showing two electrons occupying different orbitals. Therefore the Coulomb repulsion favours the parallel alignment of all the electron spins, as the exchange energy is then minimised. The above rules help to explain the strong ferromagnetic order seen in iron, cobalt and nickel. The Flash demo below shows the electronic structure of the transition elements in the 3rd row of the periodic table and the numbers of unpaired electrons. Taking the elements shown in the demo as an example, it is, however, not straightforward to predict the presence or form of magnetic order from the electronic configuration alone, as not all elements with unpaired electrons are ferromagnetic. There are other factors, principally the atomic structure, that go towards determining the type of magnetism exhibited by the elements. See the discussion on band theory in reference 1. This theory can be summarised in the following rules: * In the elements, electrons maximise their total spin and so will occupy orbitals with one electron per orbital, with all the spins parallel, until all the available orbitals contain an electron. Once this has occurred, the electrons pair up in these orbitals in pairs of opposite spin. * For atoms with filled electron shells the total atomic orbital angular momentum and total spin are zero; therefore there is no magnetic dipole moment. * For atoms with incomplete outer shells, only the incomplete shells need to be considered in calculating the total angular momentum and hence magnetic dipole moment. * If an atom has no incomplete shells then there is no permanent dipole. These atoms are called diamagnetic.
* Materials that do contain unpaired electrons are ferromagnetic only if the atomic structure allows the cooperative parallel alignment of the magnetic dipole moments.

Curie-Weiss law =

Above a critical temperature *Tc*, the Curie temperature, all ferromagnetic materials become paramagnetic. This is because thermal energy becomes large enough to overcome the cooperative ordering of the magnetic moments. The susceptibility of a material, χ, indicates how dramatically a material responds to an applied magnetic field, and is defined as the ratio of the magnetisation of the material, *M*, to the applied magnetic field, *H*: \[\chi = \frac{M}{H}\;\;\;\;{\rm{equation}}\;1\] The magnetisation of a material, *M*, is defined as the magnetic moment per unit volume or per unit mass of a material, and depends both on the individual magnetic dipole moments of the atoms in the material and on the interactions of these dipoles with each other. Above the Curie temperature the material is paramagnetic and the susceptibility follows the Curie-Weiss law: \[\chi = \frac{C}{{T - {T\_c}}} = \frac{M}{H}\;\,\,\,{\rm{equation}}\;2\] where *C* is a constant. The graph below shows the saturation magnetisation (i.e. that obtained in a high magnetic field) of a ferromagnetic element, nickel, as a function of temperature. We see that the saturation magnetisation decreases with increasing temperature until it falls to zero at the Curie temperature, where the material becomes paramagnetic: ![Variation of saturation magnetisation with temperature for Nickel](images/FigureJ.gif) Figure J. Variation of saturation magnetisation with temperature for nickel. (Data from Weiss and Forrer, 1926) Equation 2 shows that the susceptibility becomes very large as the temperature approaches the Curie temperature. This is no surprise; it is easiest to increase the magnetic moment of the material by applying a magnetic field when the material is undergoing the transition between magnetic order and disorder. The graph below for nickel shows the susceptibility tending to infinity as the temperature moves closer to the Curie temperature: ![Variation of susceptibility with temperature for Nickel ](images/FigureK.gif) Figure K. Variation of susceptibility with temperature for nickel (Sucksmith and Pearce, 1938)

Domains =

Domains are regions of a ferromagnetic material in which the magnetic dipole moments are aligned parallel. When the material is demagnetised the vector summation of all the dipole moments from all the domains equals zero. When the material is magnetised the vector summation of the dipoles gives an overall magnetic dipole (we will discuss this in more detail later).

Why do domains occur? -

From our previous discussion of the exchange interaction it would appear that the most stable state would be that of a single domain, in which all the electron spins would be aligned parallel. However, while this minimises the energy contribution arising from the exchange interaction, there are other contributions to the total magnetic energy that must be considered: * Magnetostatic energy * Magnetostrictive energy * Magnetocrystalline energy We will now consider each in turn:

Magnetostatic energy

If a material consists of a single domain then it behaves as a block magnet (Figure L (i) below), and so a "demagnetising field" (the blue arrows) must be present around the block.
This external demagnetising field has a magnetostatic energy that depends on the shape of the sample, and it is the field that allows work to be done by the magnetised sample (e.g. lifting another ferromagnetic material against the force of gravity). In order to minimise the total magnetic energy, the magnetostatic energy must be minimised. This can be achieved by decreasing the external demagnetising field by dividing the material into domains (Figure L (ii)). Adding extra domains increases the exchange energy, as the domains cannot all align parallel; however, the total energy is decreased, as the magnetostatic energy is the dominant effect. The magnetostatic energy can be reduced to zero by a domain structure that leaves no external demagnetising field (Figure L (iii)). ![Schematic showing how the addition ](images/FigureL-1.gif) Figure L. Schematic showing how the addition of domains can reduce the external demagnetising field, therefore reducing the magnetostatic energy. This is the main driving force for the formation of domains.

Magnetocrystalline energy -

Up until now we have ignored the influence of the atomic lattice structure; however, this also has an effect on the total energy of a magnetised sample. A ferromagnetic material has '*easy*' crystallographic directions, along which it is preferred that the magnetisation vector points, and '*hard*' directions, along which a higher field is required to achieve the same magnetisation. Therefore, it is easiest to magnetise a ferromagnetic material along these easy axes, as shown by the schematic below. ![Schematic showing the difference in size of field required to achieve ](images/FigureM.gif) Figure M. Schematic showing the difference in the size of field required to achieve the same magnetisation along easy and hard axes. There is an energy difference associated with magnetisation along the hard and easy axes, which is given by the difference in the areas under the (*M*, *H*) curves. This is called the magnetocrystalline energy. This energy can be minimised by forming domains such that their magnetisations point along the easy crystallographic directions. The ideal material might have easy crystallographic directions perpendicular to one another, as then the magnetostatic and magnetocrystalline energies can both be minimised (Figure L (iii) above). In the regions bounding the domains, the *domain walls*, there must be a change in the direction of the magnetisation. Within a wall the magnetisation cannot be aligned along easy axes, and so large domains with few domain walls minimise the magnetocrystalline energy. The table below summarises the easy, hard and intermediate directions in iron, nickel and cobalt:

| | **Fe** (bcc) | **Ni** (fcc) | **Co** (hexagonal) |
| - | - | - | - |
| **Easy** | <100> | <111> | <1000> |
| **Intermediate** | <110> | <110> | |
| **Hard** | <111> | <100> | <1010> |

Magnetostrictive energy -

When a ferromagnetic material is magnetised it changes length. This is known as magnetostriction; an increase in length along the direction of magnetisation is positive magnetostriction (e.g. in Fe), and a decrease in length is negative magnetostriction (e.g. in Ni). These length changes are usually extremely small: in the range of tens of parts per million. However, they do affect the domain structure of the material.
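To put "tens of parts per million" into perspective, a one-line estimate (the saturation magnetostriction used here is an assumed order-of-magnitude value, not one quoted in this TLP):

```python
lambda_s = 20e-6          # assumed saturation magnetostriction (tens of ppm)
L0 = 0.10                 # sample length (m)
delta_L = lambda_s * L0   # length change on magnetising to saturation
print(f"{delta_L * 1e6:.1f} micrometres over a {L0 * 100:.0f} cm sample")
```

A strain of this size is invisible to the eye, yet, as described next, it is enough to alter the energy balance that sets the domain structure.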
In iron the change in length causes the domains of closure to attempt to elongate horizontally (shown in blue in Figure N (i)) and the vertical domains to attempt to elongate vertically (shown in green in Figure N (i)). It is impossible for both to be accommodated, and so this causes elastic strain in the material. The elastic strain energy is proportional to the volume of the domains of closure, hence the magnetostrictive energy can be minimised by decreasing the size of these domains. Reducing the volume of the domains of closure also requires the primary domains to decrease in size and increase in number (Figure N (ii)). The addition of extra domain walls increases the magnetocrystalline and exchange energy contributions to the total energy. ![Schematic showing the attempted change in shape of the domains (i) ](images/FigureN.gif) Figure N. Schematic showing the attempted change in shape of the domains (i) and the actual change (ii) due to magnetostriction. The actual domain structure is a compromise of all the aforementioned energy contributions.

Domain walls

Domain walls are the regions between domains where the direction of magnetisation must change, usually by either 180° or 90°. ![Diagram showing domain walls](images/FigureO.gif) Figure O. Diagram showing domain walls. The width of domain walls is controlled by the balance of two energy contributions: * Exchange energy * Magnetocrystalline energy The exchange energy favours wide walls, in which adjacent magnetic dipole moments can be as close to parallel as possible, whereas the magnetocrystalline energy favours sharp changes in the dipole moments between the favoured directions in the crystal, so that as few dipole moments as possible point along "non-easy" directions. The actual width is determined by the minimum of the total energy. The most favourable domain walls are those which do not require an external demagnetising field. Three walls, or "boundaries", of this type are discussed below. 1. Twist boundary. The magnetisation perpendicular to the boundary does not vary across the domain wall, hence no demagnetising fields are generated. ![Diagram showing the rotation of magnetic moments through ](images/FigureP.gif) Figure P. Diagram showing the rotation of magnetic moments through a 180° domain wall. 2. Tilt boundary. The magnetic moments rotate in such a manner that a constant angle is maintained between them and both the wall normal and the surface. ![Diagram showing the rotation of magnetic moments in a tilt boundary](images/FigureQ.gif) Figure Q. Diagram showing the rotation of magnetic moments in a tilt boundary. 3. Néel wall. In thin films a Néel wall occurs, in which the magnetic dipole moments rotate around an axis perpendicular to the surface of the film. These are favourable in thin films because the free poles are formed on the domain wall surface rather than the film surface, causing a reduction in magnetostatic energy. ![Diagram showing the rotation of magnetic moments in a thin film in a plan view](images/FigureR.gif) Figure R. Diagram showing the rotation of magnetic moments in a thin film, in plan view.

Hysteresis

Magnetic hysteresis is an important phenomenon and refers to the irreversibility of the magnetisation and demagnetisation process. When a material shows a degree of irreversibility it is known as *hysteretic*. We will now explore the physics behind ferromagnetic hysteresis.
When a demagnetised ferromagnetic material is placed in an applied magnetic field, the domain whose direction is closest to that of the applied field grows at the expense of the other domains. Such growth occurs by motion of the domain walls. Initially domain wall motion is reversible, and if the applied field is removed the magnetisation will return to the initial demagnetised state. In this region the magnetisation curve is reversible and therefore does not show *hysteresis*. The crystal will contain imperfections, which the domain boundaries encounter during their movement. These imperfections have an associated magnetostatic energy. When a domain wall intersects a crystal imperfection this magnetostatic energy can be eliminated as closure domains form. This pins the domain wall to the imperfection, as it is a local energy minimum. The applied magnetic field provides the energy to allow the domain wall to move past the crystal imperfection, but the domains of closure cling to the imperfection, forming spike-like domains that stretch as the domain wall moves further away. Eventually these spike domains snap off and the domain wall can move freely. As the spike domains snap off there is a discontinuous jump in the boundary, leading to a sharp change in the magnetic flux, which can be detected by winding a coil around the specimen and connecting it to a speaker. In doing so, crackling noises are heard, corresponding to the spike domains breaking away from the domain walls. This phenomenon is known as the Barkhausen effect. Eventually all the domain walls will have been eliminated, leaving a single domain with its magnetic dipole moment pointing along the easy axis closest to the direction of the applied magnetic field. Further increase in magnetisation can occur by this domain rotating away from the easy direction to an orientation parallel to that of the externally applied field. The magnetisation of the material at this stage is called the saturation magnetisation (see Figure J). The ease of this final rotation depends on the magnetocrystalline energy of the material; some materials require a large field to reach this saturation magnetisation. If the external applied field is removed, the single domain will rotate back to the easy direction in the crystal. A demagnetising field will be set up due to the single domain, and this field initiates the formation of reverse magnetic domains, as these lower the magnetostatic energy of the sample by reducing the demagnetising field. However, the demagnetising field is not strong enough for the domain walls to grow past crystal defects, so the domains can never fully reverse back to their original positions once there is no external applied field. This results in the *hysteresis* curve, as some magnetisation remains when there is no external applied field. This magnetisation is called the remanent magnetisation, Br. The field required to reduce the magnetisation of the sample to zero is called the coercive field, Hc, and the saturation magnetisation, Bs, is the magnetisation when all the domains are aligned parallel to the external field. These are shown on the schematic below: ![Schematic showing the general shape of the hysteresis curve with some relevant points marked](images/FigureS.gif) Figure S. Schematic showing the general shape of the hysteresis curve with some relevant points marked: Bs, the saturation magnetisation; Br, the remanent magnetisation; Hc, the coercive field.
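The general shape of the loop in Figure S can be mimicked with a simple empirical two-branch model. This is a toy sketch only: the tanh form and all parameter values are assumptions, not a physical simulation of domain wall motion:

```python
import numpy as np

# Toy two-branch hysteresis model: each branch is a shifted tanh.
Bs, Hc, w = 1.0, 0.3, 0.2                   # assumed saturation, coercive field, width
H = np.linspace(-1.0, 1.0, 9)

B_ascending = Bs * np.tanh((H - Hc) / w)    # field increasing: crosses B = 0 at +Hc
B_descending = Bs * np.tanh((H + Hc) / w)   # field decreasing: crosses B = 0 at -Hc

for h, b_up, b_down in zip(H, B_ascending, B_descending):
    print(f"H = {h:+.2f}: B(asc) = {b_up:+.2f}, B(desc) = {b_down:+.2f}")
print(f"Remanence Br = {Bs * np.tanh(Hc / w):.2f} (descending branch at H = 0)")
```

The vertical separation of the two branches at H = 0 gives the remanent magnetisation, and the points where each branch crosses B = 0 lie at ±Hc, exactly the features marked in Figure S.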
The movie clip below shows the domain structure as the material is subjected to a hysteresis cycle.

Hard and soft magnets -

Ferromagnetic materials are further classified as soft or hard. Hard magnets, also called permanent magnets, require a large field to demagnetise (and magnetise), and generally have a large remanent magnetisation. Soft magnets, however, are easily magnetised and demagnetised; they have a small coercive field and generally a small remanent magnetisation. Soft and hard magnets have different applications, depending on the ease of magnetisation and demagnetisation required. See the animation below for examples of hard and soft magnets, their hysteresis curves and applications.

Anisotropy

Magnetocrystalline anisotropy -

We have already met magnetocrystalline anisotropy in the section about domains, where it was stated that there are easy and hard axes in a crystalline material, such that a magnetic moment will preferentially orient along the former. The magnetocrystalline anisotropy energy is the energy difference between samples magnetised along the easy and hard directions. It is important to note that the spontaneous magnetisation is the same for both directions, but the external applied field required to reach this value is different. To produce the spontaneous magnetisation, the applied field must re-orient the atomic orbitals in order to re-orient the direction of the electron spin; this is due to the spin-orbit coupling. The atomic orbitals are strongly coupled to the lattice and so resist the change in orientation. In most magnetic materials the spin-orbit coupling is weak; however, in some heavy rare-earth metals the coupling is strong. In the latter case, the resistance to re-orienting the domains away from the easy crystallographic directions is high and a large coercive field is required. Such materials have uses as permanent magnets. Magnetocrystalline anisotropy decreases with increasing temperature, until at temperatures approaching the Curie temperature there is no preferred orientation of the magnetisation.

Shape anisotropy in polycrystals -

If a sample is non-spherical it is easier to magnetise it with the dipole moments pointing along a long axis. The field required to magnetise the sample must overcome the demagnetising field, which is minimised if the magnetisation lies along the long axis. The magnetisation produces surface poles, and the magnitude of the demagnetising field decreases as the distance between the poles increases. This has consequences for the shape anisotropy of the sample: the more prolate a sample is, the greater the separation of the poles along the long axis, the smaller the demagnetising field along that axis, and hence the larger the shape anisotropy (Figure T). ![Schematic showing the shape anisotropy constant for a prolate spheroid](images/FigureT.gif) Figure T. Schematic showing the shape anisotropy constant for a prolate spheroid.

Induced magnetic anisotropy -

It is possible to produce anisotropy in polycrystals by using a treatment with directional characteristics, giving texture to the sample. Possible processes include: * Casting * Wire-drawing * Annealing in the presence of a magnetic field, giving an easy axis parallel to the applied field * Rolling

Summary =

This TLP has covered the basic points of ferromagnetism: 1. In a magnetic atom there are two contributions to the magnetic dipole moment: firstly the spin of the electrons themselves, and secondly that of the electrons orbiting the nucleus.
2. Ferromagnetism occurs in materials in which all the magnetic dipole moments align parallel below the Curie temperature. 3. Ferromagnetic ordering was explained by Weiss via a hypothetical average field which acts to cause the parallel alignment. However, the microscopic explanation for this can be found by looking at the Pauli Exclusion Principle; it is energetically more favourable for electrons to be placed in different orbitals, as this reduces the Coulomb repulsion energy and allows for the alignment of the electron spins. 4. The magnitude of the magnetisation is dependent on temperature and modelled by the Curie-Weiss law: \[\chi = \frac{C}{{T - {T\_c}}} = \frac{M}{H}\;\,\,\,{\rm{equation}}\;2\;\rm{in\;text}\] 5. The formation of domains is driven by the minimisation of energy, with the main driving force often being that of the magnetostatic energy. 6. Magnetic hysteresis is seen due to the defects found in crystals, as these hinder the movement of domain walls. 7. Hard magnets have a large coercive field, whereas soft magnets are easily demagnetised and so have a small coercive field. 8. Both magnetocrystalline anisotropy and shape anisotropy give directions in a material along which it is easier to magnetise a sample.

Questions =

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. Which picture best describes ferromagnetism?

   a. (diagram)
   b. (diagram)
   c. (diagram)
   d. (diagram)

2. Which 3 elements are ferromagnetic at room temperature?

   a. Mn, Cr, Fe
   b. Mg, Ti, Ni
   c. Fe, Ni, Co
   d. Co, Ce, Mn

3. What happens to ferromagnetic materials above the Curie temperature?

   a. They become diamagnetic
   b. They become gases
   c. They become antiferromagnetic
   d. They become paramagnetic

4. What is the main driving force for the formation of domains?

   a. Magnetostatic energy
   b. Magnetocrystalline energy
   c. Magnetostrictive energy
   d. All of the above

Going further =

### Books and Papers

1. *Magnetic Materials: Fundamentals and Device Applications* by N.A. Spaldin, Cambridge University Press (2003). A good overall explanation of ferromagnetism which also covers other types of magnetism and some applications.
2. *The Feynman Lectures on Physics* by R.P. Feynman, R.B. Leighton and M. Sands, Addison-Wesley Publishing Company (1970). Provides a complete explanation of magnetism, including the mathematics and qualitative explanations.
3. P.A.M. Dirac, The quantum theory of the electron, Proc. R. Soc. London A 117, 610-612 (1928)
4. P.A.M. Dirac, The quantum theory of the electron Part II, Proc. R. Soc. London A 118, 351-361 (1928)

References 3 and 4 are the original papers by Dirac in which the spin of the electron was derived.
Questions =

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. Which picture best describes ferromagnetism? (Options a-d are shown as images.)
   * a
   * b
   * c
   * d

2. Which 3 elements are ferromagnetic at room temperature?
   * a. Mn, Cr, Fe
   * b. Mg, Ti, Ni
   * c. Fe, Ni, Co
   * d. Co, Ce, Mn

3. What happens to ferromagnetic materials above the Curie Temperature?
   * a. They become diamagnetic
   * b. They become gases
   * c. They become antiferromagnetic
   * d. They become paramagnetic

4. What is the main driving force for the formation of domains?
   * a. Magnetostatic energy
   * b. Magnetocrystalline energy
   * c. Magnetostrictive energy
   * d. All of the above

Going further =

### Books and Papers

1. *Magnetic Materials: Fundamentals and Device Applications* by N.A. Spaldin, Cambridge University Press (2003). A good overall explanation of ferromagnetism which also covers other types of magnetism and some applications.
2. *The Feynman Lectures on Physics* by R.P. Feynman, R.B. Leighton and M. Sands, Addison-Wesley Publishing Company (1970). Provides a complete explanation of magnetism including the mathematics and qualitative explanations.
3. P.A.M. Dirac, The quantum theory of the electron, Proc. R. Soc. London A 117 610-612 (1928)
4. P.A.M. Dirac, The quantum theory of the electron Part II, Proc. R. Soc. London A 118 351-361 (1928)

These are the original papers by Dirac in which the spin of the electron was derived.

Aims

On completion of this TLP you should:

* Understand what composites are.
* Understand why composites exhibit high strength and high toughness.
* Be able to calculate the elastic constants for a single lamina and for a laminate.
* Be able to predict the strength of a single lamina and of a laminate.

Before you start

You should be familiar with the meanings of some basic elastic constants, including stiffness, strength and toughness; the use of Ashby materials selection maps (covered in a separate TLP); stress states and Mohr's circle (also covered in a separate TLP); and tensors and matrices. Some basic knowledge of plastic deformation and of brittle and ductile fracture is also useful. These are not pre-requisites, and it should be possible to make use of this TLP even without knowledge of these concepts.

Introduction

In the search for stiff, light, durable and easily manufactured materials, there has been an ever-increasing emphasis on composites over the past 50 years. Although metals have excellent combinations of strength and toughness, they are quite dense and many corrode in use. Attention has therefore turned to ceramics and polymers, which are lighter and more corrosion-resistant, but often lack toughness. Generally, a **composite** is a combination of two materials that exhibits desired properties of both constituents, such as the compressive strength of the matrix and the tensile strength of the fibres. Usually they consist of ceramic or polymer fibres embedded in a matrix, most often a thermosetting resin such as an epoxy or polyester, though the matrix can also be a thermoplastic, a ceramic or even a metal. The fibres are strong due to the removal of flaws in the case of ceramics, and molecular alignment in the case of polymers. The three main types of fibre used are:

* Glass
* Carbon
* Aramid (aromatic polyamides, such as *Kevlar*).

In some types of composite the fibres are oriented randomly within a plane, while in others the material is made up of a stack of differently-oriented "plies" to form a laminate, each ply containing an aligned set of parallel fibres. The choices of composition and of the materials used as matrix and fibre depend on the required properties; suitable materials can be identified by deriving a merit index for the performance required, followed by the use of Ashby property maps. As is often the case with science, it is nature that hints at the good mechanical properties of composites, for example wood and bone. For thousands of years we have been using composites, whether in brick walls or concrete, and now composites are widely used in automotive, aerospace, marine and sports applications. An understanding of how composite materials behave is therefore crucial to any technological advance.

Stiffness of long fibre composites

Before we go into the details it is worth noting that the most suitable material for a given application may well not be the stiffest material. For example, for a given force applied to the free end of a cantilevered composite beam, the minimum deflection per unit mass is achieved by maximising the merit index E/ρ² (see a separate TLP for more on merit indices).

An Ashby property map (Young's modulus against density) for composites:

![An Ashby property map for Youngs modulus vs density](images/ashby_property_map.jpg)

The axial and transverse Young's moduli can be predicted using a simple slab model, in which the fibre and matrix are represented by parallel slabs of material, with thicknesses in proportion to their volume fractions, f and (1 − f).
![](images/slab_model.jpg)

### Axial Loading: Voigt model

The fibre strain is equal to the matrix strain: EQUAL STRAIN.

![Diagram of axial loading of composite](images/axial-loading-diagram.gif)

\[{\varepsilon \_1} = {\varepsilon \_{1{\rm{f}}}} = \frac{{{\sigma \_{1{\rm{f}}}}}}{{{E\_{\rm{f}}}}} = {\varepsilon \_{1{\rm{m}}}} = \frac{{{\sigma \_{1{\rm{m}}}}}}{{{E\_{\rm{m}}}}} = \frac{{{\sigma \_1}}}{{{E\_1}}}\]

For a composite in which the fibres are much stiffer than the matrix (Ef >> Em), the reinforcing fibres are subject to much higher stresses (σ1f >> σ1m) than the matrix, and there is a redistribution of the load. The overall stress σ1 can be expressed in terms of the two contributions:

σ1 = (1 − f) σ1m + f σ1f

The Young's modulus of the composite can now be written as

\({E\_1} = \frac{{{\sigma \_1}}}{{{\varepsilon \_1}}} = \frac{{\left( {1 - f} \right){\sigma \_{1{\rm{m}}}} + f{\sigma \_{1{\rm{f}}}}}}{{\left( {\frac{{{\sigma \_{1{\rm{f}}}}}}{{{E\_{\rm{f}}}}}} \right)}} = \left( {1 - f} \right){E\_{\rm{m}}} + f{E\_{\rm{f}}}\)   (1)

This is known as the **"Rule of Mixtures"**: the axial stiffness is a weighted mean of the stiffnesses of the two components, depending only on the volume fraction of fibres.

### Transverse Loading: Reuss Model

The stress acting on the reinforcement is equal to the stress acting on the matrix: **EQUAL STRESS**.

σ2 = σ2f = ε2f Ef = σ2m = ε2m Em

The net strain is the sum of the contributions from the matrix and the fibre:

ε2 = f ε2f + (1 − f) ε2m

from which the composite modulus is given by:

\[{E\_2} = \frac{{{\sigma \_2}}}{{{\varepsilon \_2}}} = \frac{{{\sigma \_{2{\rm{f}}}}}}{{f{\varepsilon \_{2{\rm{f}}}} + (1 - f){\varepsilon \_{2{\rm{m}}}}}} = {\left[ {\frac{f}{{{E\_{\rm{f}}}}} + \frac{{(1 - f)}}{{{E\_{\rm{m}}}}}} \right]^{ - 1}} \;\;\;\;(2)\]

This **"Inverse Rule of Mixtures"** is actually a poor approximation for E2. In reality, regions of the matrix 'in series' with the fibres (close to them and in line along the loading direction) are subjected to a high stress similar to that carried by the reinforcing fibres, whereas regions of the matrix 'in parallel' with the fibres (adjacent laterally) are constrained to have the same strain as the fibres and carry a low stress. This leads to non-uniform distributions of stress and strain during transverse loading, which makes the model inappropriate. The slab model provides the lower bound for the transverse stiffness.

![Diagram of transverse loading of composite](images/transverse-loading-diagram.gif)

A more successful estimate is the semi-empirical Halpin-Tsai expression:

\[{E\_2} = \frac{{{E\_m}(1 + \xi \eta f)}}{{(1 - \eta f)}}\;\;\;\;(3)\]

where \(\eta = \frac{{(\frac{{{E\_f}}}{{{E\_m}}} - 1)}}{{(\frac{{{E\_f}}}{{{E\_m}}} + \xi )}}\) and ξ ≈ 1.

An even more powerful, but complex, analytical tool is the Eshelby method (see Hull and Clyne, 1996).
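These three estimates (equations 1-3) are simple enough to evaluate directly. The Python sketch below compares them for a glass/epoxy system, using the fibre and matrix moduli quoted in the questions at the end of this TLP (Ef = 76 GPa, Em = 5 GPa); the 50% volume fraction is an arbitrary illustrative choice.

```python
def E_axial(Ef, Em, f):
    """Rule of mixtures (Voigt, equal strain), equation 1."""
    return f * Ef + (1 - f) * Em

def E_transverse_reuss(Ef, Em, f):
    """Inverse rule of mixtures (Reuss, equal stress), equation 2: a lower bound."""
    return 1.0 / (f / Ef + (1 - f) / Em)

def E_transverse_halpin_tsai(Ef, Em, f, xi=1.0):
    """Semi-empirical Halpin-Tsai estimate, equation 3."""
    eta = (Ef / Em - 1) / (Ef / Em + xi)
    return Em * (1 + xi * eta * f) / (1 - eta * f)

Ef, Em, f = 76.0, 5.0, 0.5   # GPa, GPa, fibre volume fraction
print(E_axial(Ef, Em, f))                   # 40.5 GPa
print(E_transverse_reuss(Ef, Em, f))        # ~9.4 GPa
print(E_transverse_halpin_tsai(Ef, Em, f))  # ~12.8 GPa
```

Note that the Halpin-Tsai value falls between the Reuss lower bound and the Voigt upper bound, as expected.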
Strength of long fibre composites =

For a body under an arbitrary stress state, the three most important modes of failure are:

1. Axial tensile failure
2. Transverse tensile failure
3. Shear failure

| | | |
| - | - | - |
| Diagram of axial failure in composite | Diagram of transverse failure in composite | Diagram of shear failure in composite |
| *axial tensile failure* | *transverse tensile failure* | *shear failure* |

### Axial Strength

In the simplest scenario, it is assumed that both the matrix and the fibres deform elastically and subsequently undergo brittle fracture. The EQUAL STRAIN condition applies and there are two possible cases:

a) The matrix has the lower failure (ultimate) strain (εfu > εmu)
b) The fibre has the lower failure strain (εfu < εmu)

**Case a):** The composite stress is given by the rule of mixtures, σ1 = f σf + (1 − f) σm, up until the strain reaches εmu. Beyond this point the matrix undergoes microcracking, and the load is progressively transferred to the fibres as cracking continues. During this stage there is little increase in composite stress with increasing strain. With further crack growth, if the entire load is transferred to the fibres before fibre fracture, then the composite stress, σ1, becomes f σf and the composite failure stress, σ1u, is simply f σfu:

σ1u = f σfu (4)

Alternatively, if the fibres fail before the entire load is transferred onto them, the composite strength is just the weighted average of the failure stress of the matrix, σmu, and the fibre stress at the onset of matrix cracking, σfmu:

σ1u = f σfmu + (1 − f) σmu (5)

The variation of σ1u with f is shown in graph 2.

![](images/svf_graph_2.gif)

**Case b):** Again, the composite stress is given by the rule of mixtures, σ1 = f σf + (1 − f) σm, up until the strain reaches εfu, when the fibres fail. Beyond this point the load is progressively transferred to the matrix as the fibres fracture into shorter lengths. Assuming that the fibres bear no load once their aspect ratio falls below the critical aspect ratio, s\* = σf\* / 2τi\* (the critical ratio of fibre length to diameter below which the fibre cannot undergo any further fracture), composite failure then occurs at an applied stress of (1 − f) σmu:

σ1u = (1 − f) σmu (6)

Alternatively, if matrix fracture takes place while the fibres are still bearing some load, i.e. the fibre aspect ratio is above the critical value, then the composite failure stress is the weighted average of the fibre failure stress, σfu, and the matrix stress at the onset of fibre fracture, σmfu:

σ1u = f σfu + (1 − f) σmfu (7)

The variation of σ1u with f is shown in graph 4.

![](images/svf_graph_4.gif)

**Generally**, fibre volume fractions fall in the range 30% to 70% (i.e. > f′), and since it is usually the case that σmu << σfu, it is evident from graphs 2 and 4 that the fibre strength is dominant in determining the axial strength of long-fibre composites:

**∴ σ1u ~ f σfu for all axial cases.**
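As a quick numerical illustration of case a) (equations 4 and 5), the sketch below evaluates the composite failure stress across the fibre volume fraction. All the strength values are assumed, illustrative numbers, not data from this TLP.

```python
def axial_strength_case_a(f, sigma_fu, sigma_mu, sigma_fmu):
    """Composite failure stress when the matrix fails first (case a).

    The strength is the larger of equation 4 (fibres carry the whole
    load after matrix cracking) and equation 5 (fibres fail before the
    load is fully transferred).
    """
    return max(f * sigma_fu, f * sigma_fmu + (1 - f) * sigma_mu)

# Illustrative values in GPa: strong fibres, weak brittle matrix
sigma_fu, sigma_mu, sigma_fmu = 2.5, 0.05, 0.4
for f in (0.1, 0.3, 0.5, 0.7):
    s1u = axial_strength_case_a(f, sigma_fu, sigma_mu, sigma_fmu)
    print(f"f = {f:.1f}:  sigma_1u = {s1u:.2f} GPa")
# At practical volume fractions the f*sigma_fu term dominates
```

The crossover between the two regimes corresponds to the volume fraction f′ mentioned above; beyond it, failure is fibre-dominated.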
### Transverse Strength

In general, the presence of fibres reduces the transverse strength and the failure strain significantly relative to the unreinforced matrix. This tendency is largely due to the high local stresses and strains around the fibre/matrix interface, arising from the difference in the Young's moduli of the two components. However, the transverse strength is also influenced by many other factors, and consequently it is not possible to deduce a simple estimate of σ2u without making several approximations. One approach is to treat the fibres in the composite as a set of cylindrical holes in a simple square array. Considering the case where the reduction in the composite cross-sectional area is a maximum leads to the following expression for the transverse strength of a composite having a volume fraction f of fibres:

![Equation](images/equ_strength1.gif)

### Shear Strength

Similarly, we cannot derive a simple expression for the shear strength. There are a total of six combinations of shearing plane and direction, which can be grouped into three sets of equivalent pairs, as shown in the diagrams. The **shear directions** τ21 and τ31 are unlikely to operate, since these require breaking of the fibres, and it is not obvious which of τ12 and τ32 operates more easily. When considering the stresses on a thin lamina in the 1-2 plane, τ32 is zero and only τ12u is important. Finite difference methods (beyond the scope of this TLP) have been used to deduce the variation of the shear stress concentration factor, Ks, with fibre volume fraction. Here we simply take the result of this analysis, without proof, to be:

τ12u = Ks τmu

where τmu is the ultimate matrix shear stress and Ks varies as shown in graph 6. Ks is about 1 unless the fibre volume fraction is very high (> 70%), so

τ12u ≈ τmu (9)

![](images/graph_6_flash_stress_conc_factor.gif)

Composite vaulting poles – why don't they break?

### Stresses in a Vaulting Pole

![Man pole vaulting](images/pole_vault.jpg)

Video of a pole vaulter (19 feet 0¼ inches converts to 5.798 m)

A composite vaulting pole, of the type shown in the video, typically has a diameter of about 50 mm and is bent to a radius of curvature, *R*, of about 1 m. The peak strain on the outer surface is then roughly y/R = 0.025 m / 1 m = 2.5%, so for a 50% fibre pole with an axial modulus of about 40 GPa the peak stress is of order 1 GPa.

### Why are they Strong Enough?

Since σ1u ~ f σfu, the strength of the fibres, σfu, must therefore be at least about 2 GPa. In fact, the glass fibres used in composites, which are about 7 µm in diameter, have strengths of about 2-3 GPa. Using the Griffith criterion and assuming brittle fracture with a surface energy, *γ*, of 1 J m−2, the maximum flaw size must be less than about 10 nm. The key to their strength is therefore the absence of large flaws.

### Why are they Tough Enough?

It is not immediately clear why the material as a whole should be sufficiently tough, since both constituents are individually very brittle. A repeatedly loaded component will acquire many relatively large surface defects, so the fact that it can sustain such high stresses without fracturing indicates that, as a material, the composite has a high toughness. We will explore this in detail in the next section.

Toughness of composites and fibre pull–out

For composites to be useful they must not only be strong, but also tough. The fracture energy, which is related to the fracture toughness by equation 10, is determined by the available energy-absorbing mechanisms:

\[{K\_c} = {\sigma ^\*}\sqrt {\pi c} = \sqrt {E{G\_c}} \;\;\;(10)\]

For a composite these are:

* Matrix Deformation
* Fibre Fracture
* Interfacial Debonding and Crack Deflection
* Fibre Pull-Out

### Matrix Deformation

This process is important in ductile matrices, but relatively negligible in brittle ones. Load transfer onto the fibres reduces the matrix stress, and the matrix is unable to deform freely due to the constraint imposed by the fibres. Generally, matrix deformation is unimportant for composites, even with metallic matrices.

### Fibre Fracture

Metallic and polymeric fibres can undergo plastic deformation before fracture (ductile behaviour), and this contributes up to a few kJ m−2 to the overall fracture energy, whereas brittle fibre fracture provides only a few tens of J m−2. It is important to realise that fibre fracture need not occur in the crack plane.
There is a variation in flaw sizes along the fibre and, as a result, a variation in strength along the fibre, which can be described statistically (e.g. by a Weibull distribution). Typically, fibre fracture in composites makes little contribution to the overall toughness.

### Interfacial Debonding and Crack Deflection

Gcd = f s Gic

A composite made from brittle constituents can have a surprisingly high toughness if a crack is repeatedly deflected at fibre/matrix interfaces. The work of debonding per unit crack area, Gcd, is itself relatively small, but debonding allows fibre pull-out, which can potentially contribute significantly to the toughness, to occur.

Example: s = 50, f = 0.5, Gic = 10 J m−2 → Gcd = 0.25 kJ m−2

### Fibre Pull-Out

The main energy-absorbing mechanism raising the toughness of fibre composites is the pulling of fibres out of their sockets in the matrix during crack advance. This can take place once interfacial debonding and fibre fracture away from the crack plane have occurred. The pull-out work per unit crack area is given by:

Gcp = 4 f s² r τi\*

Example: s = 50, f = 0.5, r = 10 μm, τi\* = 20 MPa → Gcp = 1.0 MJ m−2

It is because of these mechanisms that composites exhibit R-curve behaviour.

Off–axis loading of a lamina

Under off-axis loading of a single lamina, the applied stress state can be resolved to give the stresses along the laminar principal axes. The stress state of a body can be described by a stress tensor, shown schematically below, which is related to the strain tensor by the equation:

\[{\sigma \_{ij}} = {C\_{ijkl}}{e\_{kl}}\;\;\;(11)\]

where Cijkl is a fourth-rank stiffness tensor containing 81 components. Equation 11 can be rearranged to give:

\[{e\_{ij}} = {S\_{ijkl}}{\sigma \_{kl}}\;\;\;(12)\]

![Equation for stress tensor](images/stress_tensor.gif)

where Sijkl is the compliance tensor. If the body is in equilibrium, both the stress tensor and the stiffness tensor must be symmetric about the diagonal. Writing equation 11 as a matrix equation (see Nye 1985) and taking into account the symmetry of the composite itself, the number of independent terms in Cpq reduces to a reasonably small number. A few examples are shown here:

![Matrix maths](images/sym_materials.gif)

### Resolving Stresses within a Lamina

For a single lamina it is reasonable to assume that all the stresses acting are in the laminar plane, so that σ3 = τ23 = τ31 = 0. Assuming orthotropic symmetry (likely for a lamina), equation 12 becomes

\[\left[ {\begin{array}{\*{20}{c}} {{\varepsilon \_1}}\\ {{\varepsilon \_2}}\\ {{\gamma \_{12}}} \end{array}} \right] = \left[ S \right]\left[ {\begin{array}{\*{20}{c}} {{\sigma \_1}}\\ {{\sigma \_2}}\\ {{\tau \_{12}}} \end{array}} \right] = \left[ {\begin{array}{\*{20}{c}} {{S\_{11}}}&{{S\_{12}}}&0\\ {{S\_{21}}}&{{S\_{22}}}&0\\ 0&0&{{S\_{66}}} \end{array}} \right]\left[ {\begin{array}{\*{20}{c}} {{\sigma \_1}}\\ {{\sigma \_2}}\\ {{\tau \_{12}}} \end{array}} \right]\;\;\rm{(13)}\]

when stresses are applied along the principal axes of the lamina. Clearly, when σ2 = τ12 = 0, \({S\_{11}} = \frac{1}{{{E\_1}}}\). Similar considerations give

\[{S\_{22}} = \frac{1}{{{E\_2}}},\;\;{S\_{66}} = \frac{1}{{{G\_{12}}}},\;\;{S\_{12}} = \frac{{ - {\nu \_{12}}}}{{{E\_1}}} = \frac{{ - {\nu \_{21}}}}{{{E\_2}}}\]

where

\[{\nu \_{21}} = \left[ {{f^{}}{\nu \_f} + {{(1 - f)}^{}}{\nu \_m}} \right]\frac{{{E\_2}}}{{{E\_1}}}\]

and ν12 = f νf + (1 − f) νm.
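These relationships can be collected into the compliance matrix of equation 13. A minimal Python sketch follows; the lamina constants are illustrative (the shear modulus and Poisson's ratio in particular are assumed values, not data from this TLP):

```python
import numpy as np

# Engineering constants for a unidirectional lamina; E1 and E2 follow the
# earlier stiffness sketch, G12 and nu12 are assumed values
E1, E2 = 40.5e9, 12.8e9   # Pa
G12 = 4.0e9               # Pa (assumed)
nu12 = 0.3                # (assumed); nu21 = nu12 * E2 / E1 follows automatically

# Compliance matrix of equation 13, relating [eps1, eps2, gamma12] to
# [sigma1, sigma2, tau12] for loading along the principal axes
S = np.array([[1 / E1, -nu12 / E1, 0.0],
              [-nu12 / E1, 1 / E2, 0.0],
              [0.0, 0.0, 1 / G12]])

strain = S @ np.array([100e6, 0.0, 0.0])   # 100 MPa applied along the fibres
print(strain)   # eps1 > 0, eps2 < 0 (Poisson contraction), gamma12 = 0
```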
We can now find the elastic constants for a lamina whose fibres are at an angle θ to the loading direction by the following resolving procedure:

\[\left[ {\begin{array}{\*{20}{c}} {{\varepsilon \_x}}\\ {{\varepsilon \_y}}\\ {{\gamma \_{xy}}} \end{array}} \right] = {\left[ {\overline S } \right]^{}}\left[ {\begin{array}{\*{20}{c}} {{\sigma \_x}}\\ {{\sigma \_y}}\\ {{\tau \_{xy}}} \end{array}} \right]\;\;(15)\]

The result is that, under an arbitrary planar loading system, the transformed compliance tensor replaces the compliance tensor in equation 13. Similarly to before,

\[{E\_x} = \frac{1}{{{{\overline S }\_{11}}}},\;\;{E\_y} = \frac{1}{{{{\overline S }\_{22}}}},\;\;{G\_{xy}} = \frac{1}{{{{\overline S }\_{66}}}},\;\;{\nu \_{xy}} = - {E\_x}{\overline S \_{12}},\;\;{\nu \_{yx}} = - {E\_y}{\overline S \_{12}}\]

Stiffness of laminates

Aligned composites are very stiff along the fibre axis, but also very compliant in the transverse direction. Many applications require equal stiffness in all directions within a plane, and the solution is to stack together and bond plies with different fibre directions. Having established the off-axis elastic constants for a single thin ply, we will now establish those for a stack of plies: a laminate. There is a further complication here: while the fibres in each ply make some fixed angle with the reference x-y axes, the elastic constants of the laminate as a whole also depend on the angle Φ that the loading direction makes with those reference axes. The corresponding stress-strain relationship for a laminate is:

\[\left[ {\begin{array}{\*{20}{c}} {{\varepsilon \_x}}\\ {{\varepsilon \_y}}\\ {{\gamma \_{xy}}} \end{array}} \right] = {\left[ {{{\overline S }\_g}} \right]^{}}\left[ {\begin{array}{\*{20}{c}} {{\sigma \_x}}\\ {{\sigma \_y}}\\ {{\tau \_{xy}}} \end{array}} \right] \;\;\;(16)\]

where the subscript g refers to global. The average stress in the x-direction of the loading system is given by:

\[{\sigma \_{x,g}} = \frac{{\sum\limits\_{k = 1}^n {({\sigma \_{x,k}}{t\_k})} }}{{\sum\limits\_{k = 1}^n {{t\_k}} }} = {\overline C \_{11,g}}{\varepsilon \_{x,g}} + {\overline C \_{12,g}}{\varepsilon \_{y,g}} + {\overline C \_{16,g}}{\gamma \_{xy,g}}\;\;\;(17)\]

where tk is the thickness of the kth ply. This is simply an expansion of equation 16. Since the in-plane strains are the same for each ply, the stress in the kth ply can be written (expanding equation 15 for the kth ply) as:

\[{\sigma \_{x,k}} = {\overline C \_{11,k}}{\varepsilon \_{x,g}} + {\overline C \_{12,k}}{\varepsilon \_{y,g}} + {\overline C \_{16,k}}{\gamma \_{xy,g}}\]

Substituting this into equation 17 and equating the coefficients of εx,g, we have:

\[{\overline C \_{11,g}} = \frac{{\sum\limits\_{k = 1}^n {({{\overline C }\_{11,k}}{t\_k})} }}{{\sum\limits\_{k = 1}^n {{t\_k}} }}\]

In this way the other components of the stiffness tensor for the laminate can also be found. The inverse of the stiffness tensor, the compliance tensor, is often what is actually wanted, because its relationships with the elastic constants are simpler. As before,

\[{E\_x} = \frac{1}{{{{\overline S }\_{11,g}}}},\;\;\;{G\_{xy}} = \frac{1}{{{{\overline S }\_{66,g}}}},\;\;\;{\nu \_{xy}} = - {E\_x}{\overline S \_{12,g}}\]

Clearly the laminate will exhibit different elastic constants if the loading system is applied at an arbitrary angle, Φ, to the x-y coordinate system.
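The ply-by-ply averaging described above can be sketched in a few lines of Python. Here the on-axis ply stiffness is the inverse of the compliance matrix from the previous sketch, and each ply's stiffness is rotated into the global frame before thickness-weighted averaging; the cross-ply lay-up and ply properties are illustrative assumptions.

```python
import numpy as np

def T_matrix(theta):
    """Stress transformation matrix [T] for a rotation theta (radians);
    the same matrix appears later in this TLP for resolving stresses."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c, s * s, 2 * c * s],
                     [s * s, c * c, -2 * c * s],
                     [-c * s, c * s, c * c - s * s]])

def rotated_stiffness(C, theta):
    """Ply stiffness in the global x-y frame: C_bar = T^-1 C T^-T
    (valid for the engineering shear strain convention used here)."""
    Tinv = np.linalg.inv(T_matrix(theta))
    return Tinv @ C @ Tinv.T

# On-axis ply compliance (illustrative values, as in the previous sketch)
E1, E2, G12, nu12 = 40.5e9, 12.8e9, 4.0e9, 0.3
S = np.array([[1 / E1, -nu12 / E1, 0.0],
              [-nu12 / E1, 1 / E2, 0.0],
              [0.0, 0.0, 1 / G12]])
C = np.linalg.inv(S)

# Thickness-weighted average over a 0/90 cross-ply laminate
plies = [(0.0, 1.0), (np.pi / 2, 1.0)]   # (fibre angle, ply thickness)
C_g = sum(t * rotated_stiffness(C, th) for th, t in plies) / sum(t for _, t in plies)
S_g = np.linalg.inv(C_g)
print("E_x =", 1 / S_g[0, 0] / 1e9, "GPa")   # laminate stiffness along x
```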
Try constructing your own laminate, using the model below, and calculate the elastic constants for different loading angles.

Tensile–shear interactions and balanced laminates =

From equation 15, the 'interaction' terms S̄16 and S̄26 are in general non-zero, which indicates that under off-axis loading normal stresses produce shear strains (as well as normal strains) and shear stresses produce normal strains (as well as shear strains). This tensile-shear interaction is also present in laminates, but it does not occur if the loading system is applied along the principal axes of a single isolated lamina, in which case S16 = S26 = 0, as in equation 13. The extent of the tensile-shear interaction is quantified by the interaction ratios:

\[{\eta \_{xyx}} = {E\_x}{\overline S \_{16}}\;\;{\rm{and}}\;\;{\eta \_{xyy}} = {E\_y}{\overline S \_{26}}\]

### Balanced laminates

Tensile-shear interactions are undesirable, as they lead to distortions, local microstructural damage and failure. A laminate whose interaction ratios are zero is said to be **'balanced'**. Use the model below to investigate the variation of ηxyx with loading angle.
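In the absence of the interactive model, the interaction ratio can be computed directly from the transformed compliance matrix, S̄ = [T]ᵀ[S][T] (the engineering-strain form of the transformation). A sketch using the same illustrative lamina constants as before:

```python
import numpy as np

def T_matrix(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c, s * s, 2 * c * s],
                     [s * s, c * c, -2 * c * s],
                     [-c * s, c * s, c * c - s * s]])

# On-axis lamina compliance (illustrative values from the earlier sketches)
E1, E2, G12, nu12 = 40.5e9, 12.8e9, 4.0e9, 0.3
S = np.array([[1 / E1, -nu12 / E1, 0.0],
              [-nu12 / E1, 1 / E2, 0.0],
              [0.0, 0.0, 1 / G12]])

for deg in (0, 15, 30, 45, 60, 90):
    T = T_matrix(np.radians(deg))
    S_bar = T.T @ S @ T          # transformed compliance
    Ex = 1.0 / S_bar[0, 0]
    eta_xyx = Ex * S_bar[0, 2]   # interaction ratio eta_xyx = Ex * S16_bar
    print(f"{deg:3d} deg:  eta_xyx = {eta_xyx:+.3f}")
# eta_xyx vanishes at 0 and 90 degrees (loading along the principal axes)
```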
Out–of–plane stresses and symmetric laminates =

So far we have only considered in-plane stresses within a given ply and within the laminate, but when plies are bonded together there are also through-thickness (or coupling) stresses between the plies, which are difficult to quantify. These arise from the differences in the Poisson contractions of the individual plies and can lead to microstructural damage and distortion of the laminate. Consider the simple case of a uniaxially loaded cross-ply laminate, shown in the animations below. The strains of the two plies parallel to the loading direction are equal, but the Poisson contraction, ν12, for the parallel ply is greater than that for the transverse ply, ν21. The result is that coupling stresses try to deform the flat laminate into the shape shown in animation iii). Extending this to a random stacking sequence, it is clear that the Poisson ratio of a single ply varies with the fibre direction, and therefore coupling stresses are present in a laminate if the plies have different constituents or if the laminae are not aligned.

### Symmetric laminates

If the laminate has a mirror plane in the laminar plane, i.e. the laminate is symmetric, the coupling forces cancel out and the distortions are significantly reduced, even though local coupling stresses still exist. For these reasons **balanced symmetric** stacking sequences are preferred, though this may not always be possible when performance requirements are taken into account.

Failure of laminates and the Tsai–Hill criterion

As with the earlier discussion of the strength of long-fibre composites, the failure of laminae can be understood through the same three failure modes: axial, transverse and shear. A number of failure criteria have been proposed for separate plies subjected to **in-plane stress states**, with the assumption that coupling stresses are not present. We will introduce two here.

### Maximum Stress Criterion

This assumes no interaction between the modes of failure, i.e. the critical stress for one mode is unaffected by the stresses tending to cause the other modes. Failure then occurs when one of the critical values, σ1u, σ2u or τ12u, is reached. These values refer to the laminar principal axes, and the principal-axis stresses can be resolved from the applied stress system by using the equation

\[\left[ {\begin{array}{\*{20}{c}} {{\sigma \_1}}\\ {{\sigma \_2}}\\ {{\tau \_{12}}} \end{array}} \right] = {\left[ T \right]^{}}\left[ {\begin{array}{\*{20}{c}} {{\sigma \_x}}\\ {{\sigma \_y}}\\ {{\tau \_{xy}}} \end{array}} \right]\]

where

\[\left[ T \right] = \left[ {\begin{array}{\*{20}{c}} {{{\cos }^2}\theta }&{{{\sin }^2}\theta }&{2\cos \theta \sin \theta }\\ {{{\sin }^2}\theta }&{{{\cos }^2}\theta }&{ - 2\cos \theta \sin \theta }\\ { - \cos \theta \sin \theta }&{\cos \theta \sin \theta }&{{{\cos }^2}\theta - {{\sin }^2}\theta } \end{array}} \right]\]

It follows that under an applied **uniaxial tension** (σy = τxy = 0) the critical values of σx for the three failure modes are:

\[{\sigma \_{xu}} = \frac{{{\sigma \_{1u}}}}{{{{\cos }^2}\theta }},\;\;\;{\sigma \_{xu}} = \frac{{{\sigma \_{2u}}}}{{{{\sin }^2}\theta }},\;\;\;{\sigma \_{xu}} = \frac{{{\tau \_{12u}}}}{{\sin \theta \cos \theta }}\]

### Tsai-Hill Criterion

Other treatments, which take into account the interactions between failure modes, are mostly based on modifications of yield criteria for metals. The most important of these is the Tsai-Hill criterion, an adaptation of the von Mises criterion. The von Mises criterion for metals is

\[{\left( {{\sigma \_1} - {\sigma \_2}} \right)^2} + {\left( {{\sigma \_2} - {\sigma \_3}} \right)^2} + {\left( {{\sigma \_3} - {\sigma \_1}} \right)^2} = 2\sigma \_Y^2\]

where σY is the metal yield stress. For in-plane stress states (σ3 = 0) this reduces to

\[{\left( {\frac{{{\sigma \_1}}}{{{\sigma \_Y}}}} \right)^2} + {\left( {\frac{{\sigma {}\_2}}{{{\sigma \_Y}}}} \right)^2} - \frac{{{\sigma \_1}{\sigma \_2}}}{{\sigma \_Y^2}} = 1\]

This is then modified to take into account the anisotropy of composites and the different failure mechanisms, giving:

\[{\left( {\frac{{{\sigma \_1}}}{{{\sigma \_{1Y}}}}} \right)^2} + {\left( {\frac{{\sigma {}\_2}}{{{\sigma \_{2Y}}}}} \right)^2} - \frac{{{\sigma \_1}{\sigma \_2}}}{{\sigma \_{1Y}^2}} - \frac{{{\sigma \_1}{\sigma \_2}}}{{\sigma \_{2Y}^2}} + \frac{{{\sigma \_1}{\sigma \_2}}}{{\sigma \_{3Y}^2}} + {\left( {\frac{{{\tau \_{12}}}}{{{\tau \_{12Y}}}}} \right)^2} = 1\]

The metal yield stresses can be regarded as composite failure stresses, and since composites are transversely isotropic (σ2u = σ3u) we arrive at the **Tsai-Hill criterion** for composites:

\[{\left( {\frac{{{\sigma \_1}}}{{{\sigma \_{1u}}}}} \right)^2} + {\left( {\frac{{\sigma {}\_2}}{{{\sigma \_{2u}}}}} \right)^2} - \frac{{{\sigma \_1}{\sigma \_2}}}{{\sigma \_{1u}^2}} + {\left( {\frac{{{\tau \_{12}}}}{{{\tau \_{12u}}}}} \right)^2} = 1\]

Below, the Maximum Stress and Tsai-Hill criteria are used to predict the dependence on loading angle of the tensile stress required to cause failure of a single lamina.
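That comparison can also be made numerically. The sketch below evaluates both criteria for a lamina under uniaxial tension at an angle θ to the fibres; the three strength values are assumed, illustrative numbers.

```python
import numpy as np

s1u, s2u, t12u = 1000e6, 50e6, 60e6   # illustrative strengths, Pa

def failure_stress_max_stress(theta):
    """Smallest sigma_x that triggers one of the three modes
    (maximum stress criterion)."""
    c, s = np.cos(theta), np.sin(theta)
    candidates = [s1u / c**2 if c else np.inf,
                  s2u / s**2 if s else np.inf,
                  t12u / abs(s * c) if s * c else np.inf]
    return min(candidates)

def failure_stress_tsai_hill(theta):
    """sigma_x at failure from the Tsai-Hill criterion; sigma_1 = sx*c^2,
    sigma_2 = sx*s^2, tau_12 = -sx*s*c are substituted into the criterion."""
    c, s = np.cos(theta), np.sin(theta)
    q = (c**2 / s1u)**2 + (s**2 / s2u)**2 - (c * s / s1u)**2 + (s * c / t12u)**2
    return 1.0 / np.sqrt(q)

for deg in (0, 15, 30, 45, 60, 90):
    th = np.radians(deg)
    print(f"{deg:3d} deg:  max stress {failure_stress_max_stress(th)/1e6:8.1f} MPa,"
          f"  Tsai-Hill {failure_stress_tsai_hill(th)/1e6:8.1f} MPa")
```

The maximum stress criterion gives a piecewise curve with cusps where the governing mode changes, while Tsai-Hill gives a single smooth envelope.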
### Failure of Laminates

The above treatments apply only to single isolated plies. To extend them to laminates, we must obtain the in-plane stresses in each ply of a laminate subjected to an arbitrary in-plane stress state. From equation 13, the stress tensor for the kth ply is related to the strain tensor by:

\[\left[ {\begin{array}{\*{20}{c}} {{\sigma \_{1k}}}\\ {{\sigma \_{2k}}}\\ {{\tau \_{12k}}} \end{array}} \right] = {\left[ C \right]\_k}^{}\left[ {\begin{array}{\*{20}{c}} {{\varepsilon \_{1k}}}\\ {{\varepsilon \_{2k}}}\\ {{\gamma \_{12k}}} \end{array}} \right]\]

The strain tensor of the kth ply can be resolved from the strain tensor of the laminate by using equation 14:

\[\left[ {\begin{array}{\*{20}{c}} {{\varepsilon \_{1k}}}\\ {{\varepsilon \_{2k}}}\\ {{\gamma \_{12k}}} \end{array}} \right] = {\left[ {T'} \right]\_k}^{}\left[ {\begin{array}{\*{20}{c}} {{\varepsilon \_x}}\\ {{\varepsilon \_y}}\\ {{\gamma \_{xy}}} \end{array}} \right]\]

Now, from equation 16, the laminate strain tensor is related to the laminate stress tensor by:

\[\left[ {\begin{array}{\*{20}{c}} {{\varepsilon \_x}}\\ {{\varepsilon \_y}}\\ {{\gamma \_{xy}}} \end{array}} \right] = {\left[ {{{\overline S }\_L}} \right]^{}}\left[ {\begin{array}{\*{20}{c}} {{\sigma \_x}}\\ {{\sigma \_y}}\\ {{\tau \_{xy}}} \end{array}} \right]\]

Combining these three equations gives:

\[\left[ {\begin{array}{\*{20}{c}} {{\sigma \_{1k}}}\\ {{\sigma \_{2k}}}\\ {{\tau \_{12k}}} \end{array}} \right] = {\left[ C \right]\_k}^{}{\left[ {T'} \right]\_k}{\left[ {{{\overline S }\_L}} \right]^{}}\left[ {\begin{array}{\*{20}{c}} {{\sigma \_x}}\\ {{\sigma \_y}}\\ {{\tau \_{xy}}} \end{array}} \right]\]

An appropriate failure criterion is then applied, and the onset of laminate failure is taken to be the point at which the first ply fails. Note that the Maximum Stress criterion suggests possible modes of failure, whereas the Tsai-Hill criterion does not.

Summary =

In this TLP we have looked at the approaches taken to derive the in-plane, on-axis elastic constants, in particular the axial and transverse **Young's moduli** and the **Poisson's ratios**, for a composite. From these, we went on to consider the **strengths** and **failure modes** of fibre-aligned composites, and to explore why they exhibit high strength and **toughness** even though the constituent materials tend to be brittle. With this basic understanding we were able to extend our consideration to **laminates**, which offer the possible advantage of near-isotropic in-plane properties, and we have seen how to calculate the elastic constants for an arbitrary in-plane stress state. Off-axis loading presents the problems of tensile-shear interactions and coupling stresses in laminates, as a result of which **balanced symmetric** laminates are preferred. In the last section we applied the **Maximum Stress criterion** and the **Tsai-Hill criterion** to predict the stress state required for laminate failure. The greatest advantage of composite materials is strength and stiffness combined with lightness and durability; these properties are the reason why composites are used in such a wide variety of applications.

Questions =

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. In mechanical loading experiments, involving large stresses and long durations, to measure the transverse stiffness of a composite, the experimental values are sometimes lower than the Halpin-Tsai prediction (some even lower than the equal-stress calculation). Why might this be?
   * a. The matrix deforms plastically during loading.
   * b. The stress distribution is uniform within the composite.
   * c. The strain distribution is uniform within the composite.
   * d. The matrix stiffness is greater than that of the fibre.

2. How would you determine the total energy absorbed during fracture of a composite from its stress-strain curve?
   * a. The area under the plot
   * b. The area under the plot times the volume of the composite
   * c. The area under the plot times the mass of the composite
   * d. The gradient of the linear region of the curve

3. What is the most significant energy-absorbing mechanism during composite failure?
   * a. Matrix deformation (this occurs to an even smaller extent than in unreinforced matrices)
   * b. Fibre fracture (some plastic deformation for ductile fibres, but this is a small contribution to the overall toughness)
   * c. Crack deflection and interfacial debonding (this is a small contribution, but it allows fibre pull-out to occur)
   * d. Fibre pull-out

4. What is the combined work done per unit crack area required for crack deflection and fibre pull-out in a 60% long-fibre composite? (Data: τi\* = 40 MPa, Gic = 8 J m−2, fibre radius r = 7 μm, pull-out length x0 = 840 μm.)
   * a. 1.1 MJ m−2
   * b. 1.8 MJ m−2
   * c. 2.4 MJ m−2
   * d. 3.2 MJ m−2

### Deeper questions

*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

5. For a laminate made up of 50% volume fraction carbon HS fibres and Nylon 6,6 matrix with a stacking sequence of 0/15/50/55/60, at what loading angle is the Poisson contraction a minimum?
   * a. 10
   * b. 30
   * c. 50
   * d. 90

6. How would you describe a laminate, composed of two constituents, with a stacking sequence of 0/45/80/45/0 subjected to a uniaxial tensile stress at a loading angle of 20 degrees?
   * a. Balanced Symmetric
   * b. Balanced Asymmetric
   * c. Unbalanced Symmetric
   * d. Unbalanced Asymmetric

7. What is the axial stiffness of a long-fibre composite composed of glass fibres arranged in a hexagonal array in an epoxy matrix? (Data: glass fibre: Ef = 76 GPa, fibre radius = 3.9 μm, spacing between centres of adjacent fibres = 8 μm; epoxy: Em = 5 GPa.)
   * a. 13 GPa
   * b. 21 GPa
   * c. 54 GPa
   * d. 67 GPa

8. Calculate the axial failure stress for a composite composed of 30% borosilicate glass matrix and 70% Kevlar fibre, assuming that if one of the components fails the entire applied load is transferred to the other component. (Data: Kevlar fibre: σfu = 3.0 GPa, Ef = 130 GPa; borosilicate glass matrix: σmu = 0.10 GPa, Em = 64 GPa.) What further assumptions do you need to make?
   * a. 0.10 GPa
   * b. 1.7 GPa
   * c. 2.1 GPa
   * d. <3 GPa

Going further =

### Books

1. Hull D. and Clyne T.W., *An Introduction to Composite Materials*, CUP, 1996
2. Clyne T.W. and Withers P.J., *An Introduction to Metal Matrix Composites*, CUP, 1993
3. Piggott M.R., *Load Bearing Fibre Composites*, Pergamon Press, 1980
4. Chawla K.K., *Ceramic Matrix Composites*, Chapman and Hall, 1993
5. Chou T.W., *Microstructural Design of Fibre Composites*, CUP, 1992
6. Harris B., *Engineering Composite Materials*, Institute of Metals, 1986
7. Kelly A., *Concise Encyclopedia of Composite Materials*, Pergamon Press, 1994
8. Ashby M.F., *Materials Selection in Mechanical Design*, Pergamon Press, 2nd ed., 1999
9. Nye J.F., *Physical Properties of Crystals: Their Representation by Tensors and Matrices*, Clarendon Press, Oxford, 1985
Aims

On completion of this TLP, the student should gain an understanding of the following topics:

* The principle behind fuel cells and what constitutes a fuel cell.
* A brief history of the technology.
* The function and specifics of four main types of fuel cell:
  + Solid oxide fuel cell (SOFC)
  + Molten carbonate fuel cell (MCFC)
  + Proton exchange membrane / polymer electrolyte membrane fuel cell (PEMFC)
  + Direct methanol fuel cell (DMFC)
* The advantages, problems and applications associated with each of the different electrolytes commonly used in fuel cells, including discussion of the factors influencing their suitability.
* The electrochemical explanation of the fuel cell processes.
* The requirements of fuelling a fuel cell system, including "balance of plant" considerations, reforming and ways of storing the fuel.
* Building a simple fuel cell.

Before you start

* You should be familiar with the principles of electrochemical reactions.
* You should understand the principles of thermodynamics.

Introduction

Conventional power plants convert chemical energy into electrical energy in three steps:

* Production of heat by burning fuel
* Conversion of heat into mechanical energy
* Conversion of mechanical energy into electrical energy

The efficiency of the second step is limited (by the Second Law of Thermodynamics) to the Carnot efficiency, since the conversion of heat into mechanical energy occurs in a closed-cycle heat engine. An efficiency of about 41% can be reached by modern systems.

A fuel cell is an electrochemical device that converts the chemical energy in fuels (e.g. hydrogen, methane, butane or even gasoline and diesel) into electrical energy. It exploits the natural tendency of oxygen and hydrogen to react to form water. The direct reaction is prevented by the electrolyte, which separates the two reactants, so two half-reactions occur at the electrodes:

* Anode: fuel (e.g. H2, CO, CH4) is oxidised
* Cathode: oxygen is reduced

The ions are transported to the other electrode through the electrolyte. The fuel cell contains no moving parts and only four active elements (cathode, anode, electrolyte and interconnect); it is a simple and robust system. Fuel cells have a number of advantages compared with conventional electricity generation:

* Negligible air pollution if fossil fuels are used, none otherwise
* Reduced weight, especially in mobile applications
* 100% theoretical efficiency; 80% efficiency in high-temperature turbine hybrid systems that can use the generated heat
* High efficiency in low-power systems
* Constant efficiency at low load
* Flexible output with fast adjustment
* Low maintenance cost and very few moving parts (or none)
* Quiet or completely silent operation

Fuel cells have many interesting applications. This short video shows a demonstration fuel cell car. Note the hydrogen and oxygen being used up by the reactions.

The principle =

Oxygen and hydrogen, when mixed together in the presence of enough activation energy, have a natural tendency to react and form water, because the Gibbs free energy of H2O is smaller than the sum of the Gibbs free energies of H2 and ½O2 (hence we don't smoke our pipes on Zeppelins!). If hydrogen and oxygen were combined directly, we would see combustion:

H2 + ½O2 → H2O

Combustion involves the direct reaction of H2 gas with O2. The hydrogen donates electrons to the oxygen.
We say that the oxygen has been reduced and the fuel oxidised. This combustion reaction releases heat energy. The fuel cell separates hydrogen and oxygen with a gas-impermeable electrolyte, through which only ions (e.g. H+, O2–, CO32–) can migrate. Hence two half-reactions occur at the two electrodes, their type being determined by the type of electrolyte. The proton exchange membrane fuel cell is one of the simplest examples:

The half-reaction at the anode: H2 → 2H+ + 2e–
The half-reaction at the cathode: O2 + 4e– + 4H+ → 2H2O
The net reaction is the combustion reaction: H2 + ½O2 → H2O

**Activation polarization** is caused by the energy-intensive activity of making and breaking chemical bonds. At the anode, hydrogen molecules enter the reaction sites and are broken into ions and electrons. The resulting ions form bonds with the catalyst atoms, and the electrons remain in the vicinity until new hydrogen molecules start bonding with the catalyst, breaking the bonds to the earlier ions. The electrons migrate through the bipolar plate if the bonding energy of the ion is low enough, and the ions diffuse through the electrolyte. A similar process occurs at the cathode: oxygen molecules are broken up and react with the electrons arriving from the anode and the protons that have diffused through the electrolyte, forming water. The water is then ejected as a waste product, and the fuel cell runs (can supply a current) as long as fuel and oxygen are provided.

The exact reactions at the electrodes depend upon which species can be transported across the electrolyte, and fuel cells are classified according to the type of electrolyte (see the Types of fuel cells section below). The most common electrolytes are permeable to protons, and the reactions are as discussed above.

The second most common electrolytes, found in solid oxide fuel cells (SOFCs), are permeable to oxide ions, and the following half-reactions occur:

The half-reaction at the anode: H2 + O2– → H2O + 2e–
The half-reaction at the cathode: O2 + 4e– → 2O2–
The net reaction is the same as before: H2 + ½O2 → H2O

A third type of electrolyte, used in molten carbonate fuel cells at high temperatures, conducts carbonate ions (CO32–):

The half-reaction at the anode: H2 + CO32– → H2O + CO2 + 2e–
The half-reaction at the cathode: ½O2 + CO2 + 2e– → CO32–
The net reaction is the combustion reaction: H2 + ½O2 → H2O

We also commonly see alkaline electrolytes, across which OH– is the transported species. In this case the half-reactions are:

The half-reaction at the anode: H2 + 2OH– → 2H2O + 2e–
The half-reaction at the cathode: O2 + 4e– + 2H2O → 4OH–
The net reaction is the combustion reaction: H2 + ½O2 → H2O
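Whatever the electrolyte, the net reaction is the same, so the ideal cell voltage and the thermodynamic efficiency limit can be estimated from standard data. A minimal Python sketch, using standard handbook values for H2 + ½O2 → H2O(l) at 25 °C (these numbers are assumptions here, not data from this TLP):

```python
F = 96485.0      # Faraday constant, C mol^-1
n = 2            # electrons transferred per molecule of H2

# Standard-state data for H2 + 1/2 O2 -> H2O (liquid) at 25 C
dG = -237.1e3    # Gibbs free energy change, J mol^-1
dH = -285.8e3    # enthalpy change (higher heating value), J mol^-1

E_rev = -dG / (n * F)   # reversible (ideal open-circuit) cell voltage
eta_max = dG / dH       # maximum thermodynamic efficiency

print(f"Reversible cell voltage: {E_rev:.2f} V")   # about 1.23 V
print(f"Maximum efficiency:      {eta_max:.0%}")   # about 83%
```

The resulting ~1.23 V and ~83% compare with the ~41% Carnot-limited figure quoted in the introduction; real cells fall below this limit because of losses such as the activation polarization described above.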
History of the technology =

The fuel cell concept was first demonstrated by William R. Grove, a British physicist, in 1839. The cell he demonstrated was very simple, probably resembling this:

![](figures/electrolysis_sml.png)

Electrolysis setup

By application of a voltage across the two electrodes, hydrogen and oxygen could be extracted (the process is called **electrolysis**) and captured as shown (William Nicholson first discovered this in 1800). The fuel cell, or "gas battery" as it was first known, is the reverse of this process. In the presence of platinum electrodes, which are necessary as catalysts, the electrolysis will essentially run in reverse, and current can be made to flow through a circuit between the two electrodes.

Nobody tried to make use of the concept demonstrated by Grove until 1889, when Langer and Mond tried to engineer a practical cell fuelled by coal gas. Further early attempts carried on into the early 1900s, but the development of the internal combustion engine made further research into the technology sadly unnecessary.

Francis Bacon developed the first successful fuel cell in 1932, running on pure O2 and H2 and using an alkaline electrolyte and nickel electrodes. It was not until 1959 that Bacon and his colleagues first demonstrated a 5 kW device; the 27-year delay is perhaps an indication of just how difficult it is to make progress in this field of development. Harry Karl Ihrig demonstrated a 20 bhp fuel cell tractor in the same year.

Around this time, NASA started researching the technology with a view to producing a compact electricity generator for use on spacecraft. Due to their astronomical budget, it was not long before they got the job done. The Gemini program used early PEM fuel cells (PEMFCs) in its later missions, and the Apollo program employed alkaline fuel cells. On a spacecraft, the water produced by the reaction was available for the astronauts to drink. NASA continued to use alkaline cells in the space shuttle until the 90s, when PEMFC development meant a switch back to PEMs was considered a possibility; however, the high cost of design, development, test and evaluation prevented the switch, in spite of several technical advantages.

![](figures/gemini.png)

PEM fuel cells being installed in a Gemini 7 spacecraft (Source: Smithsonian Institution, from the Science Service Historical Images Collection, courtesy of General Electric)

![](figures/space shuttle AFC.jpg)

The alkaline fuel cell system as used on the space shuttles. Three such modules were installed in each shuttle.

Recent developments come thick and fast as the technology begins to come to fruition. Automotive applications are high on the agenda due to the huge consumer market and the need for an environmentally friendly, renewable alternative to the internal combustion engine and fossil fuels.

![](figures/fuel_cell_auto.png)

Honda fuel cell car

Types of fuel cells =

Fuel cells are categorised according to their type of electrolyte, since it is the property-determining component. The six main types of fuel cell are outlined below.

| Fuel cell type | **DMFC** | **PEMFC** | **AFC** | **PAFC** | **MCFC** | **SOFC** |
| - | - | - | - | - | - | - |
| Electrolyte type | Polymeric ion exchange membrane | Polymeric ion exchange membrane | Immobilised alkaline salt solution | Immobilised liquid phosphoric acid | Immobilised liquid molten carbonate | Ceramic |
| Operating temperature (°C) | 20 – 90 | 30 – 100 | 50 – 200 | ~220 | ~650 | 500 – 1000 |
| Charge carrier | H+ | H+ | OH– | H+ | CO32– | O2– |
| Power range (W) | 1 – 100 | 1 – 100k | 500 – 10k | 10k – 1M | 100k – 10M+ | 1k – 10M+ |
| ***Applications and main advantages:*** | | | | | | |
| Portable electronics | Higher energy density than batteries and faster recharge | | | | | |
| Cars, boats and spaceships | | Zero emissions and higher efficiency | | | | |
| Domestic CHP | | | | Efficiency and reliability | | |
| Distributed power generation, CHP, buses | | | | | Efficiency, emissions and less noise | |
| Able to internally reform CH4 (see Fuelling section) | × | × | × | × | √ | √ |

The phosphoric acid fuel cell (PAFC) is not covered in detail in this package. It was, however, the first type of fuel cell to be commercially produced and enjoys widespread terrestrial use; many 200 kW systems are in place in the USA and Europe.

This package also doesn't cover the alkaline fuel cell (AFC) in any detail. These cells use potassium hydroxide solution as the electrolyte, which means that any CO2 at the cathode, even at the levels present in air, will react with the OH– in the solution to produce carbonates and prevent the cell from functioning. This isn't a huge problem in spacecraft, where pure oxygen can be supplied to the cathode reliably, but this characteristic flaw makes the AFC unsuitable for practical terrestrial use.

The diagram below shows the mechanisms by which the different fuel cell types operate:

![](figures/fuelcell_types_sml.png)

#### What's the catch?

As fossil fuel resources become more and more pressed to deliver the world's energy needs, as CO2 and global warming loom ever nearer, and as cities become increasingly crowded with polluting automobiles, the fuel cell seems to offer a golden solution to the world's energy problems. It's efficient, it's clean, hydrogen can be produced by renewable energy, and the technology wouldn't require any huge change in our way of life. So why don't we all drive fuel cell cars already? The technology has two fundamental flaws:

* Slow reaction rate, leading to low currents and power.
* Hydrogen is not a readily available, or easily stored, fuel.

We'll discuss ways of getting around these problems in the package. Each type of fuel cell has a different solution, but also brings its own set of difficulties.

#### Temperature differences

High temperature cells (solid oxide and molten carbonate electrolytes) operate by very different mechanisms from low temperature cells, and have different applications accordingly. The requirements of the "balance of plant" (i.e. the additional fuel processing equipment necessary to fuel a fuel cell) are also different. We therefore split the TLP in two and consider high temperature cells separately from low temperature cells.

Solid oxide fuel cells (SOFCs)

#### High temperature cells

In the late nineteenth century, conduction in such materials was not yet understood. Nernst observed at the University of Göttingen that stabilized zirconia (ZrO2 doped with Ca, Mg or Y) is an insulator at room temperature, an ionic conductor from 600–1000 °C, and a mixed conductor (both electronic and ionic) at around 1500 °C. The key component of the solid oxide fuel cell had therefore been discovered. The fuel cell concept was demonstrated by Baur and Preis in the 1930s using zirconium oxide, but many improvements were necessary to make a competitive device. In the 1950s a simple, straightforward design made cheaper manufacturing processes possible: the flat plate fuel cell.
![](figures/flat_plate_sofc_sml.png)

Flat plate solid oxide fuel cell

There are a few problems with the flat plate design when used for larger devices: sealing around the edges, thermal expansion mismatch, and cracking (intrinsically brittle ceramics are used). Tubular designs have been developed to solve these problems (see the animation below).

SOFCs are the most efficient devices yet invented for converting chemical energy into electrical energy. Both electrodes (cathode and anode) and the electrolyte are made of ceramic materials, since the high operating temperature prevents the use of cheaper metals. The big advantage of the SOFC over the MCFC is that the electrolyte is solid: no pumps are required to circulate a hot liquid electrolyte. The anode contains nickel for better electronic conduction and catalysis. The operating temperature is between 600 and 1000 °C, depending on the generation of the fuel cell (first, second and third, with decreasing operating temperature). However, thermal cycling can cause cracking of the brittle ceramic components. Both hydrogen and carbon monoxide serve as fuels, and common hydrocarbon fuels (diesel, natural gas, gasoline, alcohol, etc.) can be used in SOFCs.

![](figures/sofc_schematic_sml.png)

Operation of a SOFC

The operation of the solid oxide fuel cell is straightforward: oxygen molecules are reduced on the porous cathode surface by electrons. The oxide ions diffuse through the electrolyte to the fuel-rich, porous anode, where they react with the fuel (hydrogen) and give up electrons to an external circuit. A large amount of heat is produced by the electrochemical reaction, which can be used by an integrated heat management system. Since a SOFC takes a long time to reach its operating temperature, the best applications are ones that use both the heat and the electricity generated: stationary power plants and auxiliary power supplies. Start-up time problems could be solved by using supercapacitor batteries for the first few minutes of operation in mobile applications.

• Electrolyte =

This section of the TLP describes the fundamental properties of the most common materials used as electrolytes in SOFCs. There are several criteria that the electrolyte has to meet. It must be:

* Dense and leak-tight
* Stable in reducing and oxidising environments
* A good ionic conductor at operating temperatures
* A non-conductor of electrons
* Thin, to reduce ionic resistance
* Extended in area, for maximum current capacity
* Thermal-shock resistant
* Economically processable

The materials used are solid, ion-conducting ceramics. There are two main groups of such ion conductors, fluorite structured and perovskite structured, besides newer materials such as hexagonal structured oxides. The three most common electrolyte materials are doped ceria (CeO2) and doped lanthanum gallate (LaGaO3), both oxygen ion conductors, and doped barium zirconate (BaZrO3), a proton conductor. The concentration and type (ionic radius) of the dopants influence the material properties strongly: dopants that cause the least strain, and hence the least influence on the potential energy landscape of the parent lattice, have the biggest effect on the conductivity.

In an oxide ion conductor, current flows by the movement of oxide ions through the crystal lattice.
This is a thermally activated process, in which the ions hop from one lattice site to another (from one potential valley to another) in a random way. When an electric field is applied, a drift in one direction is superimposed on the random thermal motion.

![](figures/potential_energy_sml.png)

Potential energy in an electric field in a periodic crystal

Ionic conduction depends on the mobility of the ions and therefore on temperature. At high temperatures the conductivity can reach 1 S cm−1, which is of the same order of magnitude as for liquid electrolytes.

The crystal has to contain unoccupied sites equivalent to the sites occupied by lattice oxygen ions, and the energy barrier for migration from an occupied site to an unoccupied one must be small (≤ 1 eV). This might seem unusual, since oxygen ions are relatively large and it might seem more likely that the smaller metal ions would migrate in an electric field. This is why only a few special structures make oxygen ion migration possible: fluorite structured oxides, perovskites, the LAMOX family and BIMEVOXes.

Fluorite oxides are the most common and classical oxygen ion conducting materials. The crystal structure consists of a cubic oxygen lattice with alternate body centres occupied by eight-coordinated cations. The cations are arranged in a face-centred cubic structure with the anions occupying the tetrahedral sites. This leaves a rather open structure with large octahedral interstitial voids.

Rotating zirconia lattice

The general formula has the form AO2, where A is usually a large tetravalent cation, e.g. U, Th or Ce. Since Zr4+ is too small to sustain the fluorite structure at low temperatures, it has to be partly substituted with a larger cation, called a dopant. Doping usually involves substituting lower-valence cations into the lattice; in order to maintain charge neutrality, oxygen vacancies have to be introduced, and these allow oxygen ion migration.

![](images/YSZ_sml.png)

![](figures/vacancy_transport_YSZ_sml.png)

Vacancy transport in YSZ

An interesting feature of the fluorite structure is that it can sustain a high degree of substitution. This results in a very disordered and open-structured material, which promotes ionic conduction.

![](figures/tem.png)

High resolution transmission electron micrograph depicting the nickel and YSZ interface

By substituting the host cation sites with either a rare earth or an alkaline earth element, just as with yttria-stabilised zirconia (YSZ), an increase in ionic conduction can be achieved. Zirconia (zirconium dioxide, ZrO2) in its pure form has a high melting temperature and a low thermal conductivity. The applications of pure zirconia are restricted because it shows polymorphism: it is *monoclinic* at room temperature and changes to the denser *tetragonal* phase from circa 1000 °C. This involves a large change in volume and causes extensive cracking, so zirconia has a low thermal shock resistance. The addition of certain oxides stabilises the cubic phase and creates oxygen vacancies; for yttria the reaction (in Kröger-Vink notation) is

\[{\rm{Y}}\_2{\rm{O}}\_3\;({\rm{in\;Zr}}{{\rm{O}}\_2}) \to 2{\rm{Y}}{'\_{{\rm{Zr}}}} + 3{\rm{O}}\_{\rm{O}}^ \times + {\rm{V}}\_{\rm{O}}^{ \bullet \bullet }\]

with charge neutrality requiring \(2[{\rm{Y}}{'\_{{\rm{Zr}}}}] = [{\rm{V}}\_{\rm{O}}^{ \bullet \bullet }]\).

![](figures/phase_diagram_YO_sml.png)

Phase diagram of partially stabilized zirconia (PSZ)
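The charge-neutrality bookkeeping above lends itself to a short numerical check: each Y2O3 formula unit introduces two Y′Zr defects and one oxygen vacancy. A minimal Python sketch of this site counting (the simple model is an assumption; it ignores the dopant-vacancy association that, as noted below, reduces conductivity at high dopant levels):

```python
def vacancy_site_fraction(x_Y2O3):
    """Fraction of oxygen sites vacant in (Y2O3)_x (ZrO2)_(1-x).

    Per x mol Y2O3 and (1 - x) mol ZrO2:
      cations:      2x (Y) + (1 - x) (Zr)  ->  x + 1 cation sites
      anion sites:  2 per cation in fluorite = 2(x + 1)
      oxygens:      3x + 2(1 - x) = 2 + x, so x anion sites are vacant
    """
    return x_Y2O3 / (2 * (x_Y2O3 + 1))

for x in (0.03, 0.08, 0.16):
    print(f"{x:.0%} Y2O3  ->  {vacancy_site_fraction(x):.2%} of O sites vacant")
# 8 mol% Y2O3 (8YSZ) leaves roughly 3.7% of the oxygen sublattice vacant
```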
Partially stabilized zirconia (PSZ) is a mixture of zirconia polymorphs: a cubic and a metastable tetragonal ZrO2 phase is obtained when an insufficient amount of stabilizer has been added. PSZ is also called tetragonal zirconia polycrystal (TZP). PSZ is a transformation-toughened material, since the induced microcracks and stress fields absorb energy. It is used for crucibles because it has a low thermal conductivity and a high melting temperature.

The addition of 16 mol% CaO, 16 mol% MgO or 8 mol% Y2O3 is enough to form fully stabilized zirconia. The structure becomes a cubic solid solution, which undergoes no phase transformation on heating from room temperature up to 2500 °C. Because of its high oxide ion conductivity, YSZ is often used for oxygen sensing and in solid oxide fuel cells.

![](figures/YSZ_surface_sml.png)

Scanning electron micrograph showing an 8 mol% Y2O3 (8YSZ) surface

It might be expected that an increase in dopant concentration would lead to an increase in conductivity. This correlation only applies at low dopant concentrations: at higher levels, dopants in the first and second coordination shells start interacting with the oxygen vacancies, and the conductivity decreases. The conductivity can be calculated as follows:

$$\sigma = {A \over T}\left[ {{\rm{V}}\_{\rm{O}}^{ \bullet \bullet }} \right]\left[ {{\rm{\bar V}}\_{\rm{O}}^{ \bullet \bullet }} \right]\exp \left( {{{ - E} \over {RT}}} \right)$$

where E is the activation energy for conduction, T is the temperature, R and A are constants, and \(\left[ {{\rm{\bar V}}\_{\rm{O}}^{ \bullet \bullet }} \right]\) is the concentration of unoccupied oxygen vacancy sites.

![](figures/conductivity_plot_sml.png)

Conductivity as a function of temperature
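The expression above can be used to estimate how thin the electrolyte must be at a given operating temperature. In the Python sketch below the vacancy terms are absorbed into the pre-exponential constant, and the activation energy, pre-exponential and tolerable area-specific resistance are all assumed, illustrative values (none are quoted in this TLP):

```python
import numpy as np

R = 8.314                 # gas constant, J mol^-1 K^-1
E_act = 0.9 * 96485.0     # activation energy: ~0.9 eV per ion, in J mol^-1 (assumed)

def conductivity(T, A=3.6e5):
    """sigma = (A/T) exp(-E/RT) in S cm^-1; the vacancy-concentration
    product is absorbed into the pre-exponential A (an assumption)."""
    return (A / T) * np.exp(-E_act / (R * T))

ASR_target = 0.15         # tolerable area-specific resistance, ohm cm^2 (assumed)

for T_C in (600, 800, 1000):
    T = T_C + 273.15
    sigma = conductivity(T)
    t_max = ASR_target * sigma        # ASR = thickness / sigma
    print(f"{T_C:4d} C: sigma = {sigma:.2e} S/cm, max thickness = {t_max * 1e4:.0f} um")
```

With these assumed numbers, thicknesses of order 100 μm are tolerable near 1000 °C, but only a few tens of μm or less at 600-800 °C, which is the trend discussed later in this section.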
In present-day fuel cells, the electrolyte of choice is zirconia stabilised by either 3 mol% Y2O3 (3YSZ) or 8 mol% Y2O3 (8YSZ). YSZ is not the best ion conductor, but it is the cheapest to process and has a low enough electronic conductivity. There are many other oxide ion conductors, but the advantages of YSZ (abundance, chemical stability, non-toxicity and economics) make it the most suitable material at present. Its drawbacks are a high thermal expansion coefficient and hence problems with sealing the fuel cell. The world demand for YSZ is rising but, fortunately, Zr is one of the most common elements in the Earth's crust, usually found as the silicate zircon (ZrSiO4). This material has to be purified, since SiO2 tends to block the ionic and electronic paths. Yttria is the main stabilizer used, and about 13–16 wt% has to be added to give a fully stabilized cubic material. The supply of rarer dopants such as scandia could present a problem in the future. Another interesting fluorite-structured material is CeO2 doped with 10 mol% Gd2O3 (GCO). It is especially useful for lower temperature applications, but GCO is an electronic conductor in the reducing environment at the anode, and hence short-circuiting is a problem. Fabrication of zirconia electrolyte films is usually done by *tape casting* or *vapour deposition*.

The second highly interesting group of solid state ion conductors is the perovskites. The general perovskite stoichiometry is ABO3.

(Video: rotating perovskite lattice)

Due to the number of cation charge combinations (2+4, 5+1, 3+3) that give a total charge of +6 on the A and B sites, the high stability of the structure, and the wide variety of cations that can be accommodated within it, perovskites have a wide range of properties, which make them suitable not only for SOFCs but also as ferroelectrics, oxidation catalysts or superconductors.

High ionic conductivity in perovskites is achieved by doping the material with trivalent elements, for example Y on the Zr site of BaZrO3, so that oxygen vacancies are introduced. The conductivity of ABO3 perovskites depends strongly on the size of the A cation and less so on the size of the B cation, since the oxide ions have to migrate through a triangular space consisting of two large A cations and one smaller B cation. Enlarging this triangular space facilitates the migration of oxide ions through the lattice, so higher ion conductivity can be expected with larger lattice dimensions.

In order to incorporate hydroxyl groups onto the vacant oxide sites, the material is exposed to a humid atmosphere. The second proton of each water molecule attaches to some other oxygen atom in the structure. Because of the loose bonding between the hydrogen ion and the oxygen atom, conduction occurs easily, by hydrogen ions jumping from one oxygen to the next.

The perovskites LaGaO3 and BaZrO3 have proved highly interesting, since their structures are very tolerant and can accommodate large concentrations of dopants, though fewer than in YSZ (hence the ion conductivity of perovskites is always less than that of YSZ). A particular composition, La0.9Sr0.1Ga0.8Mg0.2O3-d (LSGM), has been found to have similar, purely ionic conductivity to CeO2, including at low temperatures (<600 °C). The evaporation of Ga causes problems with stability, though. Other problems, such as the reactivity of LSGM with nickel electrodes, have been solved by adding a CeO2 buffer layer between the two materials.

In the case of an electrolyte-supported design, the electrolyte has to be 120–150 μm thick. Operation at low temperatures requires a thin electrolyte, since the conductivity falls as the temperature is lowered. For operating temperatures between 600–800 °C, the electrolyte layer thickness cannot exceed 20 μm to ensure a high enough conductivity.
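One way to see where such thickness limits come from is through the area-specific resistance (ASR) of the electrolyte, ASR = L/σ. The sketch below is a supplementary illustration: the ASR target and the conductivity value are assumed, order-of-magnitude numbers, not figures from this TLP.

```python
# Area-specific resistance of an electrolyte layer: ASR = L / sigma,
# so the maximum tolerable thickness is L = ASR * sigma.
target_asr = 0.15     # ohm cm^2, a commonly quoted design target (assumed)
sigma = 0.02          # S/cm, order of magnitude for YSZ near 800 C (assumed)

max_thickness_cm = target_asr * sigma
print(f"max electrolyte thickness ~ {max_thickness_cm * 1e4:.0f} um")
# -> ~30 um, the same order as the <20 um quoted above for 600-800 C operation
```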
![](../images/divider400.jpg)

#### Electrolyte fabrication process

The latest production method involves electrochemical vapour deposition, used (at Westinghouse) to make tubular cells. Doped lanthanum manganite (the cathode material) is placed in a low-pressure chamber, and zirconium chloride plus yttrium chloride vapour is passed along the outside of the tube while water vapour is passed along the inside of the tube. A more conventional production method is tape casting:

![](figures/tape_casting.png) Tape casting

• Electrode materials =

This section of the TLP introduces some of the most common materials used as electrodes in SOFCs. The electrodes have to possess the following properties to ensure smooth operation of the fuel cell:

* High electrical conductivity
* High catalytic activity
* High surface area
* Compatibility with the electrolyte (and interconnect)

For many years, platinum was the only material commonly used as the electrode. The biggest disadvantage of platinum is its cost: it is not economically viable to build fuel cells with platinum electrodes. Hence the pressure to find cheaper electrode materials, such as the perovskites (e.g. LaCoO3 and LaMnO3), was high. Further development of these early perovskites was abandoned because of fast degradation due to their reactivity with YSZ.

The **cathode** material is very important because the oxygen reduction reaction largely determines the efficiency of the fuel cell. Since cathodes operate in a highly oxidising environment, it is impossible to use cheap base metals. The best compromise is semiconducting oxides, such as **doped** lanthanum cobaltites and lanthanum manganites. A newer material, La0.8Sr0.2MnO3 (LSM), has the suitable properties: good electronic conductivity and a matching thermal expansion coefficient. Since the operating temperature of fuel cells has been reduced to below 1000 °C, it is possible to mix LSM with YSZ in 50/50 proportion to form the first layer on the electrolyte. Other materials have been used, such as La0.6Sr0.4Co0.2Fe0.8O3 (LSCF). LSCF has the advantages of lower power losses at lower temperatures and less susceptibility to poisoning by chromium, and it is used with ceria-based electrolytes.

![](figures/YSZ-LSM_sml.png) Reduction reaction on the surface of a cathode made of LSM-YSZ

The loss of cathode performance is mainly due to changes in the microstructure and the phase composition of the material under load conditions. If stainless steel interconnects are used, degradation caused by Cr poisoning occurs, since Cr evaporates from the steel and condenses preferentially on the cathode. In general, the rate at which Cr poisoning occurs decreases with decreasing operating temperature.

![](figures/YSZ-Nickel_sml.png) Oxidation reaction on the surface of an anode made of Ni-YSZ

Nickel is used as the **anode** because it is economical and exhibits high performance. However, because of poor adherence and a mismatch in expansion coefficients, it flakes off the electrolyte easily unless it is mixed with zirconia to create a cermet. The Ni-YSZ anode allows rapid, clean access of the fuel and is a good electronic conductor, although Ni is liable to become coated with a carbon layer when reacting with carbon-based fuels, which can prevent further reaction. Certain additives to the Ni-YSZ cermet, such as 5% ceria or 1% molybdena, inhibit this process. Besides catalysing the oxidation of hydrogen, Ni is also active in the reforming of carbon-containing fuels. Anodes made completely of ceramics show good oxidation-reduction cyclability.

![](figures/holes.png) Scanning electron micrograph showing a nickel anode

The contact sides of the electrolyte with the fuel and the oxidant are coated with the electrode material. To form *porous* contact layers, partially sintered materials are used, and to allow a gradient of properties (such as thermal expansion coefficient) several layers with different compositions are laid down. Since the gas atoms discharge (or absorb) electrons at the anode (or cathode), a three-phase boundary zone is required:

* Gas phase (high porosity required for better access)
* Electrolyte phase for ion transport
* Metal phase for electron conduction

Hence a volumetric, three-dimensional region has to be provided for the reaction. The most common methods of applying the electrode layer to the electrolyte are *plasma spraying*, *vapour deposition*, *solution coating* and *colloidal ink* methods.

• Interconnection =

For the interconnection, an inert and impervious material is needed. It should withstand both oxidising and reducing environments. Lanthanum chromite seems to have the necessary properties for systems operating at 1000 °C. Depending on the doping, this material matches the thermal expansion coefficient of LSM. For lower temperatures, metallic alloys can be used. Again, plasma spraying is the most economical method of applying the interconnect layer to the electrode.
Although lanthanum chromite provides cell lifetimes of up to 70,000 h, it is not perfectly inert: it expands in the presence of hydrogen, causing cracking, especially in large planar stacks.

![](figures/interconnect.png) Section of SOFC stack with interconnect. From top to bottom: steel, Ni-mesh, cell, contact paste, interconnect steel

Large lanthanum chromite interconnects are made from fine powder, which is prepared as a mixture of the desired components: lanthanum, strontium and chromium nitrate. This mixture is reacted with glycine at high temperatures. The powder can be compacted to form plates or extruded to make tubes. It is difficult to sinter the powder to full density.

• Thermodynamics

An SOFC is an electrochemical device that converts the chemical energy of the fuel and oxidant directly and reversibly into electrical energy. It is not a better or improved heat engine; it is fundamentally different. Hydrogen and oxygen will be used to illustrate the thermodynamics of the fuel cell.

![](../images/divider400.jpg)

#### The ideal reversible SOFC – basic derivation of potential and efficiency

The first and second laws of thermodynamics describe the reversible SOFC. The reactants (fuel and air) deliver the total enthalpy \(\sum {n_i}{H_i}\) and the total enthalpy \(\sum {n_j}{H_j}\) leaves the fuel cell, so the change in enthalpy is ΔH = \(\sum {n_j}{H_j} - \sum {n_i}{H_i}\). The heat qFC has to be extracted from the fuel cell and the reversible work wFC delivered.

![](figures/thermo_fc_diagram_sml.png)

1st law of thermodynamics: \({q_{FC}} + {w_{FC}} = \Delta H\)

2nd law of thermodynamics: \(\oint {dS = 0}\)

The reaction entropy has to be compensated by the transport of heat to the environment:

$$\Delta S - {{{q_{FC}}} \over {{T_{FC}}}} = 0$$

From the above equations, the reversible work is \({w_{FC}} = \Delta H - {T_{FC}} \cdot \Delta S\). The Gibbs free energy change is equal to the reversible work of the reaction. The reversible efficiency is the ratio of the Gibbs energy change to the reaction enthalpy:

$${\eta_{FC}} = {{\Delta G} \over {\Delta H}} = {{\Delta H - {T_{FC}} \cdot \Delta S} \over {\Delta H}}$$

The fuel cell is an electrical device, for which the processes can be fully described by thermodynamic principles. Hydrogen is absorbed at the anode, where it is ionised, and the electrons are conducted away to do useful work. Oxygen molecules arriving at the cathode are ionised by the electrons coming from the anode. The protons and oxide ions react to form water.

Anode: H2 ↔ 2H+ + 2e–
Cathode: ½O2 + 2e– ↔ O2–
Net reaction: 2H+ + O2– ↔ H2O

This shows that the molar flow of hydrogen is twice the molar flow of oxygen. The electric current is therefore:

$$I = {\dot n_{el}} \cdot \left( { - e} \right) \cdot {N_A} = - 2{\dot n_{{H_2}}} \cdot F$$

![](figures/cell_voltage_temp_sml.png) The reversible cell voltage of different fuels at different states (p, T) of the environment

The electric current is a measure of the rate at which fuel is spent.
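As a quick numerical illustration of the current relation above (the hydrogen flow rate chosen here is arbitrary):

```python
# Electric current drawn from a given hydrogen consumption rate:
#   |I| = 2 * F * n_dot_H2   (two electrons per H2 molecule)
F = 96485.0           # Faraday constant, C mol^-1
n_dot_h2 = 1.0e-5     # hydrogen consumption, mol s^-1 (arbitrary example)

current = 2 * F * n_dot_h2
print(f"current ~ {current:.2f} A")        # ~1.93 A

# With a reversible cell voltage of ~1 V (order of magnitude, see the
# figure above), the corresponding reversible power would be:
voltage = 1.0
print(f"power   ~ {current * voltage:.2f} W")
```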
The electric and thermodynamic quantities are matched by considering the reversible power:

$$P = V \cdot I = {\dot n_{{H_2}}} \cdot w = {\dot n_{{H_2}}} \cdot \Delta G$$

Hence the reversible voltage:

$$V = {{ - {{\dot n}_{{H_2}}} \cdot \Delta G} \over {{{\dot n}_{el}} \cdot F}} = {{ - \Delta G} \over {{n_{el}} \cdot F}}$$

Using the assumption that we have (near) ideal gases, a more accurate equation for the Gibbs energy is:

$$\Delta G(T,p) = \Delta H(T) - T \cdot \Delta S(T,p)$$

where

$$S(T,p) = {S^0} + \int\limits_{{T_0}}^T {{{{C_p}(T)} \over T}dT} - R \cdot \ln \left({p \over {{p_0}}}\right)$$

We get for the Gibbs energy:

$$\Delta G(T,p) = \Delta G(T) + T \cdot R \cdot \ln (K)$$

where K is the equilibrium constant. From the above analysis we obtain the Nernst potential:

$${V_N} = {{ - \Delta G(T)} \over {{n_{el}} \cdot F}} - {{R \cdot T \cdot \ln (K)} \over {{n_{el}} \cdot F}}$$

![](../images/divider400.jpg)

• System and outlook

The operating temperature of an SOFC is relatively high. A typical SOFC power plant is fuelled with natural gas because of the lack of a hydrogen infrastructure. A plant must have three main components:

1. The preheater raises the temperature of the fuel and air to near the operating temperature. At the same time, the preheater reforms the gas to hydrogen by steam reforming. Steam reforming consists of two steps:

Methane reforming: CH4 + H2O → CO + 3H2
Water gas shift: CO + H2O → CO2 + H2
Overall reaction: CH4 + 2H2O → CO2 + 4H2

2. The cell stack electrochemically oxidises the hydrogen stream, drawing oxide ions through the electrolyte from the air stream.

Electrochemical reaction: H2 + ½O2 → H2O

![](figures/reforming_sml.png) ![](figures/FC_system_sml.png) The schematic diagram above depicts a complete 250 kW fuel cell system

3. The lower cycle utilises the exhaust energy. The exhaust gases are so hot that gas turbines can be driven to generate additional electrical energy, increasing the efficiency of the fuel cell system to up to 80%.

![](figures/siemens_1.png) ![](figures/siemens_2_sml.png) SFC-200, a 125 kW SOFC cogeneration system

The durability of the SOFC is mainly determined by the processes occurring during thermal cycles and oxidation-reduction cycles, and by sulphur contamination (even at high temperatures, sulphur is absorbed by the anode).

#### Fuel

One of the great advantages of the SOFC is that it can use a wide range of fuels, depending on the anode composition. Due to the high operating temperature, internal reforming can take place at the anode when steam is added to the fuel. The reaction of methane is as follows:

CH4 + H2O → CO + 3H2

Both hydrogen and carbon monoxide can react with the oxide ions. A shift reaction also occurs at the anode, since the reaction of CO is slow, producing more hydrogen:

CO + H2O → CO2 + H2

The disadvantage of using hydrocarbon fuels is the possible formation of coke on the anode:

2CO → CO2 + C
CH4 → 2H2 + C

As mentioned above, impurities such as sulphur are also damaging to the SOFC. Only desulphurised natural gas can be used as fuel. Other additives (more than 100 different molecules are present in commercial gasoline) can have damaging effects on the nickel anode.
The activity of the nickel anode decreases due to sintering and coke formation when carbon-containing fuels are used. The ceramic parts can break easily if vibrational forces are present. This is one reason why SOFCs are better suited to stationary applications than to mobile ones. The ultimate goal is to build a decentralised network of medium-sized power-generating SOFCs that can supply a small community with electricity, with much higher reliability and more minor consequences in case of failure compared to the current system of few but very large power plants.

Molten carbonate fuel cells (MCFCs) =

#### High temperature cells

Molten carbonate fuel cells (MCFCs) are another type of high temperature fuel cell. A molten mixture of salts (lithium, sodium and potassium carbonate) is used as the electrolyte. These salts melt and conduct carbonate ions (CO32–) from the cathode to the anode when heated to about 600 °C. Hydrocarbons have to be used as part of the fuel, since the charge carriers in the electrolyte are carbonate ions. Hydrogen is also needed at the anode; it is obtained by internal reforming of hydrocarbon-based fuels. The electrodes should be resistant to poisoning by carbon. The high exhaust temperature makes cogeneration of electricity with turbines possible, hence the efficiency (60% without and 80% with hybrid technology) is relatively high compared to other fuel cell systems. MCFCs are mainly used for stationary power generation in the 50 kW to 5 MW range. Since the electrolyte is a high-temperature liquid, the MCFC is rather unsuitable for mobile applications. The main problem with the MCFC is the slow dissolution of the cathode in the electrolyte. Most research is therefore in the area of more durable materials and cathodes.

![](figures/MCFC_photo.png) Molten carbonate fuel cell

![](../images/divider400.jpg)

#### Historical summary

Both the solid oxide and the molten carbonate fuel cells are high temperature devices. Their development followed similar lines until the late 1950s. First, E. Baur and H. Preis experimented with solid oxide electrolytes in Switzerland. The technical problems they encountered were tackled again by the Russian scientist O.K. Davtyan, without success though. In the late 1950s, the Dutch scientists G.H.J. Broers and J.A.A. Ketelaar focused on molten carbonate salts as the electrolyte, and by 1960 they reported the first MCFC prototype. In the mid-1960s, the US Army's Mobility Equipment Research and Development Center (MERDC) tested several MCFCs made by Texas Instruments, ranging from 100 to 1000 watts. In the early 1990s, Ishikawajima Heavy Industries in Japan showed that a 1000 watt MCFC power generator could operate continuously for 10,000 hours. Other large power plants with outputs of up to 3 megawatts are already planned.

![](figures/MCFC_plant_sml.png) M-C Power's molten carbonate fuel cell power plant in San Diego, California, 1997. Smithsonian Institution, from the Science Service Historical Images Collection, courtesy of National Energy Technology Laboratory.

The MCFC has been under development for 15 years as a stationary electric power plant, although once most problems with the solid oxide fuel cell are solved, work on the MCFC might be stopped.

• Electrolyte =

In most cases, the electrolyte of the MCFC is made of a lithium-potassium carbonate salt heated to 600–1000 °C.
At this temperature, the salt is in the liquid phase and can conduct ions between the two electrodes. The typical mixture ratio of the electrolyte is 62 m/o Li2CO3 and 38 m/o K2CO3 (62/38 Li/K). This particular mixture of carbonate salts melts at 550 °C and, when mixed with lithium aluminate (LiAlO2, a ceramic matrix that retains the molten salts), it can serve both as an ion-conducting electrolyte and as gasketing for the fuel cell stack. Negative carbonate ions (CO32–) are responsible for conduction. As discussed above, long-term performance is an issue for MCFCs.

![](figures/MCFC_schematic.png)

The following properties of the electrolyte have to be taken into account:

* Volatility of the different alkali metal hydroxides generated in the moist cathode atmosphere
* Solubility of the cathode (NiO)
* Segregation of the electrolytes
* Solubility of both oxygen and carbon dioxide in the electrolyte
* Oxygen reduction kinetics

Loss of electrolyte mainly occurs at the cathode via hydrolysis:

MeCO3 + H2O ⇌ 2MeOH + CO2

In particular the electrolyte that has been used unaltered for a long time (since Ketelaar and Broers), the Li2CO3/K2CO3 (38/62) eutectic, has a relatively high volatility. This causes the fuel cell to dry out. The partial pressure of MeOH varies with the square root of the ratio of the water vapour to carbon dioxide vapour pressures:

$$p(M{e_i}OH) = {K_i}(T)\sqrt {{{{p_{{H_2}O}}} \over {{p_{C{O_2}}}}}} $$

Ki(T) is the equilibrium coefficient of the carbonate ion in the melt according to the equilibrium equation:

CO32– + H2O ⇌ CO2 + 2OH–

The anode off-gas, on the other hand, can be mixed with air after combustion and reused in the cathode chamber with a sufficiently high excess of air. Here oxygen and carbon dioxide are consumed in a molar ratio of 1:2 by the cathode process. The change in the composition of the electrolyte due to segregation and the volatility of certain species may result in a change in melting temperature. The electrolyte can solidify, causing the fuel cell to malfunction and sometimes allowing gases to break through. Carbonate ions from the electrolyte are used up in the reactions at the anode, so the loss has to be compensated by injecting carbon dioxide at the cathode.

Segregation occurs as the potassium concentration increases near the cathode. This leads to an increased cathode solubility and hence a decline in cell performance. Recent studies show that using Na instead of K can decrease the amount of segregation. In a molten binary salt with a common anion, segregation occurs due to the difference in the mobilities of the cations. The heavier and bigger potassium (or sodium) cation is faster than lithium in mixtures with a potassium concentration above x(K2CO3) = 0.32 (the Chemla effect), whereas below this isotachic concentration lithium is faster.

The partial pressure of carbon dioxide has a much smaller effect on the current-voltage curves than the partial pressure of oxygen. Mass transfer limitations are not observed for CO2, but they are for oxygen at low partial pressures. CO2 transport also hinders the cathode operation much less than oxygen transport, since CO2 has a much better solubility in carbonate melts than O2. Only the Li2CO3/Na2CO3 eutectic, with its much lower sodium vapour pressures, could assure the long-term performance of fuel cells. We can conclude that the Li/Na electrolyte is more reliable and safer than Li/K.
Since the ionic conductivity of Li/Na carbonate melts is higher than that of Li/K carbonate melts, Li/Na is the preferred electrolyte material. But because the electrolyte is in the liquid phase, the fuel cell needs a more complex design compared to technologies using a solid electrolyte.

• Electrochemistry

External fuel processors are not needed for MCFCs, since the fuels can be reformed internally at the high operating temperatures. Internal reforming converts methane and steam into a hydrogen-rich gas:

CH4 + H2O ⇌ CO + 3H2

At the anode, hydrogen reacts with the carbonate ions to produce water, carbon dioxide and electrons:

H2 + CO32– ⇌ H2O + CO2 + 2e–

The electrons are conducted away through an external circuit, doing useful work, to the cathode. There, oxygen from the air and carbon dioxide from the anode react with the electrons to form carbonate ions:

½O2 + CO2 + 2e– ⇌ CO32–

The carbonate ions migrate through the electrolyte to the anode, completing the electrical circuit. CO32– is used up at the anode; CO2 is needed at the cathode.

• Electrodes

A significant advantage of the MCFC is that non-noble metals can be used as electrodes. At the high operating temperature, a nickel anode and a nickel oxide cathode are able to promote the electrochemical reactions. This means lower production costs compared to low temperature fuel cells, where the catalytic electrode is usually made of platinum. The Ni electrodes are less prone to CO poisoning, so coal-based fuels can be used, especially since internal reforming can take place.

![](figures/solubility.png) Solubility of electrode in electrolyte

The main problem with the electrodes is their solubility in the electrolyte, a dissolution/reprecipitation process. It decreases the internal surface of the porous nickel oxide cathode, causing it to deteriorate. The solubility of nickel oxide (the cathode material) depends on the cathode potential and the temperature. The solubility of Ni and NiO in Li/Na melts has been found to be lower than in Li/K melts. Although Li/Na melts have been found to have superior performance compared to Li/K melts, their lower oxygen solubility reduces the cathode performance on lean gas with a low oxygen partial pressure (below 0.1 bar).

Proton exchange membrane fuel cells (PEMFCs) =

#### Low temperature cells

The proton exchange membrane (a.k.a. polymer electrolyte membrane) fuel cell uses a polymeric electrolyte. This proton-conducting polymer forms the heart of each cell, and electrodes (usually made of porous carbon with catalytic platinum incorporated into them) are bonded to either side of it to form a one-piece membrane-electrode assembly (MEA). A quick overview of some key advantages that make PEMs such a promising technology for the automotive markets:

* Low temperature operation, and hence quick start-up
* No corrosive liquids involved
* Will work in any orientation (or zero g, for that matter)
* Thin membrane-electrode assemblies allow compact cells

![](../images/divider400.jpg)

#### Brief history

The PEM fuel cell was developed in the 1960s in General Electric's labs. As with so many technologies, research funded by the space program and the military fast-forwarded its development.
PEM membranes were first applied to a US Navy project and to projects for the US Signal Corps. PEM cells were used in NASA's Gemini program, which served as a means of testing technology for the Apollo missions. Batteries were not suitable for a journey to the moon because of the extended flight duration. Early PEM systems were, however, unreliable and plagued with leakages and contamination. The systems installed in the Gemini spacecraft had an operational lifetime of just 500 hours, although this was considered sufficient at the time. Another issue was the water management systems, which are required to keep the membrane hydrated to the correct extent. Apollo designers opted for the more mature technology of AFCs, as did the Space Shuttle designers in the 1970s. Recently, however, as part of NASA's program of continuous upgrades to the Shuttles, PEM systems have replaced the aging AFC technology as the primary power source for the Shuttle's systems.

GE decided to abandon its research on PEMFCs in the 1970s, probably because of the cost. At that time, the catalysis required 28 mg of platinum per cm2 of electrode, compared to the current figure of 0.2 mg cm–2 or less.

Automobiles are arguably one of the most important consumer products on the planet. The finite fuel reserves they are chewing through are not currently a limiting factor, but they will be soon. Much investment has been aimed at developing fuel cell technology for the automotive industry, and the electrolyte of choice is the PEM. We will look at the problems which automotive companies need to overcome before fuel cell cars hit the street. Recent developments in PEMFCs have brought their current densities up to around 1 A cm–2 and cut the platinum requirement to 1% of what used to be needed. The scope of PEMFCs is, arguably, wider than that of any other power supply technology, with the potential to power a range of devices from mobile phones and laptops to buses, boats and houses.

![](../images/divider400.jpg)

#### Construction of the PEM cell

The PEMFC is constructed in layers of bipolar plates, electrodes and membranes:

![](figures/pem_components.png) PEMFC components

Each individual cell produces about 0.7 V EMF when operating in air, as calculated by the expressions outlined in the efficiency section. In order to produce a useful voltage, the electrodes of many cells must be linked in series. In addition to connecting the cells, we must ensure that the reactant gases can still reach the electrodes and that the resistance of the electrodes has a minimal effect. If, for example, we were to simply wire the edge of the anode of one cell to the cathode of another, the electrons would have to flow across the face of the electrodes. Each cell only produces ~0.7 V, and even a small reduction in this is not permissible, so cells are not normally wired up this way. A *bipolar plate* is used instead to interconnect the anode of one cell to the cathode of the next. It must evenly distribute reactant gases over the surface of the anode, and oxygen/air over the cathode. Bipolar plates may also need to carry a cooling fluid and, in addition, need to keep all these gases and cooling fluids separate. Design considerations:

* The electrical contacts should be as large as possible
* The plate should be thin to minimise resistance
* Gas needs to flow easily across the plate

Often these factors are antagonistic to one another; for instance, a large contact area would reduce the width of the gas channels.
A very simple bipolar plate might look like this:

![](figures/bipolar_plate_sml.png) ![](figures/flat_plate_sofc_sml_250.png) A typical bipolar plate (left) found in a plate-type PEM assembly (right)

Reactant gases flow at right angles to each other. In a simple plate design like the one above, the channels extend right to the edge, and the reactant gases would probably be supplied to the system via *external manifolding*.

![](figures/external_manifold_sml.png) External manifolding

External manifolding is a very simple solution, and therefore does the job cheaply, but the technique has two major disadvantages:

1) The gaskets needed to seal the plates do not form a tight seal where the channels come to the edge of the plate, leading to localised leaks of the reactant gases.

2) Additional channels for cooling fluids are very difficult to incorporate into an externally manifolded system, so all the cooling must be done by the air flowing across the cathode. This means more air than is necessary for the reaction must be pumped through the channels, which in turn means that the channels must be wider, that the chance of leaks is increased and that some of the energy produced must be used to power blowers.

Whilst simplicity is always a bonus, external manifolding is rarely used in modern systems.

![](figures/nexa.png) In this image of a Ballard Nexa™ fuel cell system, the fan used to blow air through the stack for cooling is visible on the left of the stack.

Most modern bipolar plates make use of *internal manifolding*. The three examples below show how this might be achieved. In each case, the channels do not run to the edge of the plates, so a gasket can be fitted and a gas-tight seal is more easily achieved.

![](figures/internal_manifolding.png) Internal manifolding

* The design on the left is a fairly simple parallel-channels design; reactant gases would be blown into one end of the channels through one hole and removed at the other hole. Many different designs are possible, and designers of bipolar plates are yet to reach agreement on which type is best. In parallel designs, water or gas may build up along one of the channels, causing a temporary blockage. In this case, the reactants will happily continue to pass through the other channels and not clear the blockage.
* The second, serpentine design guarantees that if reactants are flowing at all, they are flowing along the whole channel, so blockages are easily cleared. The problem in this case is that it takes more effort to push reactants through the long, winding path.
* The third design is a compromise between the two and is the type of thing often seen in bipolar plate design.

The channels are typically about 1 mm in width and depth. The pressure difference between the start and end of a channel must be engineered to overcome the surface tension of water droplets forming on the channel walls in order to clear blockages. Ballard, for example, achieve this pressure difference with rectangular plates in which the gases run across the long axis in a long parallel design.

The material properties of a bipolar plate, as summed up by Ruge and Büchi (2001), must take into account several important factors:

* Electrical conductivity > 10 S cm–1
* Heat conductivity of 20 W m–1 K–1 if a cooling fluid is integrated, or 100 W m–1 K–1 if heat is removed from the edges
* Gas permeability < 10–7 mbar L s–1 cm–2
* Resistance to corrosion in an environment of acidic electrolyte, hydrogen, oxygen, heat and humidity
* Reasonably high stiffness, E > 25 MPa
* As ever, it should cost as little as possible

The plates must also be manufactured so that they are:

* Thin, to minimise stack volume
* Light, to minimise stack mass
* Able to be produced quickly, with a short cycle time

These various and difficult specifications, along with the fact that modern electrodes require very little catalytic platinum, mean that the bipolar plate is the most expensive part of a modern fuel cell.

![](../images/divider400.jpg)

#### PEMFCs without bipolar plates

As discussed, bipolar plates may provide excellent contact between cells, but they are expensive and complex. Some manufacturers, often on the smaller industrial scale, choose different techniques to link their cells. Cells can be connected simply edge to edge, reducing the possibility of leakage. One manufacturer (Intelligent Energy) produces cells with stainless steel bases through which hydrogen channels pass. The cathode current collector is a porous metal, and these individual cell units are simply stacked with a piece of corrugated stainless steel between them. It is a simple solution which may gain popularity. In conclusion, we should note that although a broad range of bipolar plate techniques exists, none of them fully meets the criteria set out above. There is a lot of development still to be done in this area before a new industry standard emerges.

• Membrane

DuPont's Nafion™ ion exchange membrane forms the basis of the proton exchange membrane fuel cell. Each company involved in the development of PEMFCs may have its own variation on Nafion; however, they are all based on the same sulphonated fluoropolymers, and Nafion remains something of an industry standard in membranes, to which all others are compared (although it is not always the most suitable). Nafion is a polymer based on PTFE (polytetrafluoroethylene).

![](figures/PTFE.png) PTFE

Nafion is essentially PTFE containing a fraction of pendant sulphonic acid groups. (Nomenclature: “sulphonic acid group” usually refers to the un-dissociated SO3H group, whereas “sulphonate” refers to the ionised SO3– group after the proton has dissociated.) The ion-containing fraction is normally given in terms of equivalent weight (i.e. the number of grams of dry polymer per mole of acidic groups). The useful equivalent weight for Nafion ranges from 800 to 1500 g mol–1.

![](figures/nafion.png) ![](figures/dow.png) Nafion structure (left) and a fluoropolymer (right), made by the DOW chemical company, also used in PEMFCs

The length and the precise nature of the side chains vary between different brands of polymer. Common to all is the PTFE-based fluorocarbon “backbone” of the polymer, which has several desirable properties:

* PTFE is hydrophobic, which means the hydrophilic sulphonate groups are effectively repelled by the chains and cluster together.
* PTFE is extremely resilient to chemical attack. The environment within the membrane is hostile and very acidic; hydrocarbon-based polymers would tend to degrade rapidly.
* PTFE is a thermoplastic with high mechanical strength, meaning very thin membranes can be produced, reducing the thickness of each cell and increasing the power density of the stack.

![](../images/divider400.jpg)

#### Transport through the membrane

The animation below demonstrates schematically the mechanism of proton transport in the proton exchange membrane.
In reality, the protons are strongly associated with water molecules and are transported in the form of H3O+ hydronium ions, or even higher-order cations. The Zundel (H5O2+, basically a protonated water dimer) and Eigen (H7O3+) cations are thought to be particularly important in the transfer of protons from one hydronium to another.

*Points to note:*

* Sulphonic acids are highly acidic (pKa ~ –6 in Nafion), meaning they have a high tendency to dissociate into anions and protons (the effect of the alien's blood in the “Alien” films was produced with chlorosulphonic acid). It is, of course, these protons that act as the charge carriers through the membrane.
* In order for the polymer to conduct H+, it must be hydrated to the correct degree, to promote dissociation of the ionic groups and provide a mechanism for proton transport. Proton conductivity is strongly dependent on the water content of the membrane. The water in the membrane is localised around the hydrophilic groups, where the protons dissociate and are transported both in a vehicular manner (by diffusion of hydrated protons) and structurally (via proton transfer between hydrated clusters).
* Typical PEMs have conductivities of the order of 0.01–0.1 S cm–1 at 80–90 °C, which is a far lower temperature than for other solid-state (usually ceramic) electrolytes.

• Electrodes and membrane-electrode assembly

#### Catalyst

In the first fuel cells, platinum was used in relatively large quantities. This perhaps led to the false belief that most of the cost of a fuel cell is down to the platinum in it. Generally, this is not the case. Platinum particles are deposited very finely onto carbon powders, so that the platinum is very finely divided with a maximal surface area. With catalysts produced in this way, the raw-material platinum cost is just $10 for a 1 kW cell stack.

![](figures/catalysts.jpg) Catalyst made of carbon powders decorated with platinum particles

![](../images/divider400.jpg)

#### Bonding

Before the catalyst layer is applied to the electrolyte, a coating of soluble electrolyte is brushed onto it. This ensures good contact between the platinum and the electrolyte, achieving the important three-phase interaction between gas, catalyst and electrolyte necessary for the reaction to proceed. The catalyst can be applied to the membrane in one of two ways: either the catalyst powder is applied directly to the membrane, by rolling, spraying or printing, with the supporting electrode structure (often called the gas diffusion layer) added afterwards; or the electrodes are assembled separately and bonded to the membrane in complete form by hot pressing. The catalyst powder is sometimes mixed with PTFE to drive out product water and prevent the electrode becoming waterlogged. The “gas diffusion layer” is added between the catalyst and the bipolar plate to provide some rigidity to the MEA and to ensure ease of diffusion. This layer is usually composed of carbon cloth or carbon paper of 0.2–0.5 mm thickness, with more PTFE added to expel water.

![](figures/MEA.png) The membrane electrode assembly, once completed

• Efficiency and reaction conditions

The proton exchange membrane is a solid-state electrolyte that functions at around 80 °C.
Compared to the roughly 1000 °C at which ceramic electrolytes become conductive, this is a low temperature.

![](../images/divider400.jpg)

#### What temperature is preferable?

One of the key banners under which fuel cells are marketed is their efficiency. We must consider how this efficiency comes about and what factors influence it. To do this, we must look at the thermodynamics governing the fuel cell.

* **Considering the energy of the system: the Gibbs free energy change.**

Let's consider the energy of a hydrogen fuel cell system as follows:

| **INPUTS:** | **PROCESS:** | **OUTPUTS:** |
| - | - | - |
| Hydrogen | FUEL CELL | Electrical energy = VIt |
| Oxygen |  | Heat |
|  |  | Water |

It is easy to calculate the electrical power and energy output of the system:

Power = VI  ;  Energy = VIt

The “chemical energies” of the inputs and outputs are a little more difficult to define. It is the change in Gibbs free energy that we must consider in this case (or, more precisely, Gf, the energy of formation, because we use the convention of comparison to pure elements in their standard states), which is the energy available to do external work. In the case of a fuel cell system, this external work is pushing electrons through the external circuit, past whatever impedances we put in their way. Work done by changes in volume and temperature between inputs and outputs is not harnessed by the fuel cell (although this is possible in turbine hybrid systems). The Gf of both O2 and H2 is zero, a useful result when dealing with a hydrogen-oxygen fuel cell. ΔGf refers to the difference in Gibbs free energy of formation between the inputs and the outputs, and is therefore a specific measure of the energy released by the reaction:

ΔGf = Gf of products – Gf of reactants

We usually consider this quantity per mole of chemical. Let us find the chemical energy released during a nominal fuel cell reaction:

H2 + ½O2 → H2O

The Gibbs free energy of a system is defined as G = H – TS, which leads to the change in free energy being expressed as:

(1)     \(\Delta {\overline g _f} = \Delta {\overline h _f} - T\Delta \overline s \)

Note that we have gone lower-case and added little lines above the letters. This signifies that we are dealing with molar quantities, so that the units will be joules per mole or something similar. The value of Δ*h*f is the difference between the *h*f of the products and the *h*f of the reactants. So for the reaction H2 + ½O2 → H2O, we have:

(2)     \( \Delta {\overline h _f} = {({\overline h _f})_{{H_2}O}} - {({\overline h _f})_{{H_2}}} - {1 \over 2}{({\overline h _f})_{{O_2}}}\)

And Δ*s* is the difference between the entropy of the products and the reactants, so that:

(3)     \( \Delta \overline s = {(\overline s )_{{H_2}O}} - {(\overline s )_{{H_2}}} - {1 \over 2}{(\overline s )_{{O_2}}}\)

These values of *s* and *h*f vary with temperature and pressure according to the equations given below. A full derivation of these equations is beyond the scope of this TLP but can be found in thermodynamics textbooks. It should be noted that we use 298 K as the standard temperature, which is necessary as an integration limit. The “T” subscript to the enthalpy, *h*, means the enthalpy at temperature T.
(4)     $${\overline h _T} = {\overline h _{298}} + \int\limits_{298}^T {{{\overline C }_P}} dT$$

Similarly, the entropy, *s*, at temperature T is given by:

(5)     $${\overline s _T} = {\overline s _{298}} + \int\limits_{298}^T {{1 \over T}} {\overline C _P}dT$$

Values for standard enthalpies and entropies are obtainable from tables; some are given below:

|   | ***h*f (J mol–1)** | ***s* (J mol–1 K–1)** |
| - | - | - |
| **H2O liquid** | –285,838 | 70.05 |
| **H2O steam** | –241,827 | 188.83 |
| **H2** | Zero (element in standard state) | 130.59 |
| **O2** | Zero | 205.14 |

We also need to know the molar heat capacity at constant pressure, *C*p. This is not constant with temperature, but can be described by empirical equations such as those below (units are J mol–1 K–1).

For H2:

(6)     \({({\overline C _P})_{{H_2}}} = 56.505 - 22222.6\,{T^{ - 0.75}} + 116500\,{T^{ - 1}} - 560700\,{T^{ - 1.5}}\)

For steam:

(7)     \({({\overline C _P})_{{H_2}{O_{(g)}}}} = 143.05 - 58.040\,{T^{0.25}} + 8.2751\,{T^{0.5}} - 0.036989\,T\)

And for O2:

(8)     \({({\overline C _P})_{{O_2}}} = 37.432 + 2.0102 \times {10^{ - 5}}\,{T^{1.5}} - 178570\,{T^{ - 1.5}} + 2368800\,{T^{ - 2}}\)

One may now substitute the values of *C*p from equations (6)-(8) into equations (4) and (5) to yield integrals that evaluate to values of *h*f and *s* for H2O(g), H2 and O2 at any temperature. Substituting these values into equations (2) and (3) gives values for Δ*h*f and Δ*s*, which can in turn be substituted into equation (1) to yield the all-important Δ*g*f value. The table below was calculated using these equations:

| **Temperature (°C)** | **Δ*g*f (kJ mol–1)** |
| - | - |
| 25 (liquid) | –237.2 |
| 80 (liquid) | –228.2 |
| 80 (steam) | –226.1 |
| 100 | –225.2 |
| 200 | –220.4 |
| 400 | –210.3 |
| 600 | –199.6 |
| 800 | –188.6 |
| 1000 | –177.4 |

This can be handily represented in the following graph:

![](figures/gibbs_free_energy.png)

The graph shows us, somewhat obviously, that the energy obtainable from a fuel cell reaction decreases with the temperature at which the reaction is carried out. This, however, is not the only reason for the loss of efficiency at higher temperatures, as we shall see.
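As a cross-check on the table above, equations (1)-(8) can be evaluated numerically. The sketch below does exactly that for one temperature, using simple trapezoidal integration; the standard-state values come from the tables above.

```python
# Numerical sketch of equations (1)-(8): Gibbs energy of H2 + 1/2 O2 -> H2O(g).

def cp_h2(T):    # eq (6), J mol^-1 K^-1
    return 56.505 - 22222.6*T**-0.75 + 116500*T**-1 - 560700*T**-1.5

def cp_h2o(T):   # eq (7)
    return 143.05 - 58.040*T**0.25 + 8.2751*T**0.5 - 0.036989*T

def cp_o2(T):    # eq (8)
    return 37.432 + 2.0102e-5*T**1.5 - 178570*T**-1.5 + 2368800*T**-2

def integrate(f, a, b, n=2000):          # simple trapezoidal rule
    step = (b - a) / n
    return step * (f(a)/2 + sum(f(a + i*step) for i in range(1, n)) + f(b)/2)

T = 1073.0                               # 800 C
h, s = {}, {}
for name, cp, h298, s298 in [("H2O", cp_h2o, -241827.0, 188.83),
                             ("H2",  cp_h2,  0.0,       130.59),
                             ("O2",  cp_o2,  0.0,       205.14)]:
    h[name] = h298 + integrate(cp, 298, T)                  # eq (4)
    s[name] = s298 + integrate(lambda t: cp(t) / t, 298, T) # eq (5)

dH = h["H2O"] - h["H2"] - 0.5 * h["O2"]  # eq (2)
dS = s["H2O"] - s["H2"] - 0.5 * s["O2"]  # eq (3)
dG = dH - T * dS                         # eq (1)
print(f"dG at 800 C ~ {dG/1000:.1f} kJ/mol")  # close to the -188.6 tabulated
```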
![](../images/divider400.jpg)

#### Irreversibilities

This Δ*g*f value represents the energy released by the reaction H2 + ½O2 → H2O. If there were no losses, if the fuel cell process operated completely *reversibly*, the fuel cell would be able to convert 100% of this Gibbs free energy into useful electrical energy. However, losses, or irreversibilities, creep into every system. For example, some of the energy released in the reaction will inevitably leave the system as heat. Once it has floated away in warmer air, that energy cannot be recovered and turned back into useful electrical energy; this is why we call it an irreversibility rather than a loss.

![](../images/divider400.jpg)

#### Back to efficiency

Calculating the efficiency of a fuel cell system is not easy, partly because it is hard to define what the term refers to in this situation. It helps to look at a situation where efficiency is a little more obvious. The efficiency limit of a heat engine (such as a gas or steam turbine or an internal combustion engine) is defined as:

$${\rm{Carnot\;\; limit}} = {{{T_1} - {T_2}} \over {{T_1}}}$$

T1 stands for the maximum temperature of the engine and T2 represents the temperature of the exhaust, both in kelvin. The limit stems from the fact that there is always going to be some energy, proportional to T2, which is “lost”. Fuel cell systems are not subject to this limit; so how do we give a measure of their efficiency? We have already seen that it is the **Gibbs free energy** that is converted into electrical energy. If not for *irreversibilities*, all of it could be converted into electrical energy, giving a 100% maximum efficiency. We could therefore define the efficiency of a fuel cell system as:

$${{{\rm{Electrical\; energy \;output}}} \over {{\rm{Gibbs\; free\; energy\; change}}}}$$

When this is the case, however, no matter what the reaction conditions are, the efficiency limit is 100%, so the figure is not really much use to anybody. A common measure of the energy contained in a fuel is its calorific value: a measure of the heat that would be produced by burning the fuel. A more precise measurement is the change in the enthalpy of formation, Δ*h*f; as with the Gibbs energy, the convention is that a negative value corresponds to energy released during the reaction. Using this value, we can obtain a more useful efficiency value for a fuel cell:

$${{{\rm{Electrical\; energy \;output \;per \;mole \;of \;fuel}}} \over {\Delta {{\bar h}_f}}}$$

Recall, however, that the Δ*h*f of steam is different from that of liquid water, hence there are two possible efficiency values for a given process, depending on the state of the output. The table below lists the Δ*h*f values (in J mol–1) of the reaction H2 + ½O2 → H2O, depending on the form of the product:

| H2O liquid | –285,838 |
| - | - |
| H2O steam | –241,827 |

Papers quoting an efficiency value will usually say whether it refers to the higher heating value, the HHV (liquid water as the product), or the lower heating value, the LHV (steam as the product). We can express the maximum possible efficiency as:

$${{\Delta {{\overline g }_f}} \over {\Delta \overline {{h_f}} }} \times 100\% $$

which is often termed the “thermodynamic efficiency” of the fuel cell. We can use this equation, together with the relation between Δ*g*f and the reversible EMF, to come up with various efficiency limits and maximum reversible EMF values. These are listed in the table below, which is an extension of the earlier table:

| **Temperature (°C)** | **Δ*g*f (kJ mol–1)** | **Max EMF (V)** | **Efficiency limit** |
| - | - | - | - |
| 25 (liquid) | –237.2 | 1.23 | 83% |
| 80 (liquid) | –228.2 | 1.18 | 80% |
| 100 | –225.2 | 1.17 | 79% |
| 200 | –220.4 | 1.14 | 77% |
| 400 | –210.3 | 1.09 | 74% |
| 600 | –199.6 | 1.04 | 70% |
| 800 | –188.6 | 0.98 | 66% |
| 1000 | –177.4 | 0.92 | 62% |

![](figures/efficiency_temp.png)
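The “Max EMF” and “Efficiency limit” columns are straightforward to reproduce. The sketch below is a supplementary check, not part of the original text: it uses the standard reversible EMF relation E = –Δ*g*f / (n_el F) with n_el = 2, and the quoted percentages appear to be normalised against the liquid-water (HHV) enthalpy, so that value is assumed throughout.

```python
# Reproducing the Max EMF and Efficiency limit columns of the table above.
#   EMF:              E   = -dg_f / (2 * F)       (2 electrons per H2)
#   Efficiency limit: eta = dg_f / dh_f (HHV)     (inferred from the table)
F = 96485.0
DH_HHV = -285838.0   # J/mol, liquid water product (from the table above)

dg_table = {25: -237.2e3, 80: -228.2e3, 100: -225.2e3, 200: -220.4e3,
            400: -210.3e3, 600: -199.6e3, 800: -188.6e3, 1000: -177.4e3}

for T_c, dg in dg_table.items():
    emf = -dg / (2 * F)
    eff = dg / DH_HHV
    print(f"{T_c:5d} C:  EMF = {emf:.2f} V   efficiency limit = {eff:.0%}")
```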
![](../images/divider400.jpg)

#### Are higher temperatures better?

Points to note about the graph above:

* The graph is quite specific to the hydrogen fuel cell. If we were to look at a cell fuelled by CO:

> CO + ½O2 → CO2

we would see that Δ*g*f becomes less negative even faster with increasing temperature, so that the efficiency limit of 82% at 100 °C falls to 52% at 1000 °C. On the other hand, the Δ*g*f of the methane-fuelled reaction:

> CH4 + 2O2 → CO2 + 2H2O

hardly changes with temperature, so its efficiency limit stays roughly constant. We must therefore remember that the temperature dependence of the efficiency is characteristic of the particular reaction.

* It can also be seen that fuel cells do not always have a higher efficiency limit than heat engines.

Since the efficiency limit of the H2-fuelled cell falls with operating temperature, one might be tempted to conclude that hydrogen fuel cells should be run at the lowest possible temperature. There are several reasons why this is not the case:

1. In higher temperature systems, the heat produced can be more useful. Turbine hybrid systems can be used to utilise the energy of the exhaust gases. This is not as easy if the fuel cell is run at low temperatures.
2. The voltage losses of the cell (discussed later in this section), the irreversibilities which are an inevitable part of the process, are generally less significant at higher temperatures.

![](../images/divider400.jpg)

In conclusion, the factors determining the efficiency of the system are many and varied. We can estimate the efficiency obtained simply by measuring the open circuit voltage of the fuel cell, Vc, and we can make sense of this by looking at the reaction and how the Gibbs free energy is transferred between reactant and product species.

Fuelling requirements =

**AFC**

Alkaline fuel cells, using KOH as the electrolyte, require pure H2 and O2 as their fuel. This is because even the small amount of CO2 in the air (about 300 ppm) is enough to prevent the cell from functioning properly: CO2 reacts with the KOH as follows:

2KOH + CO2 → K2CO3 + H2O

The presence of potassium carbonate in the electrolyte may reduce cell activity by:

* Reducing the concentration of OH–
* Increasing the viscosity of the solution, hindering the diffusion of ions through it
* Reducing the solubility of oxygen in the solution
* Precipitation of the K2CO3, which would reduce the surface area of the electrodes

Solution: the use of pure oxygen at the cathode is necessary. This can be achieved by:

* Regenerative methods, i.e. using some external power source (e.g. photovoltaic cells) to hydrolyse H2O in situ and store the resulting gases
* Removing the CO2 from the air. One proposed method takes advantage of the heat-exchanging stage necessary when using cryogenically stored hydrogen (the H2 must be warmed and the cell must be cooled) to freeze the CO2 out of the air

| Species: | H2 | CO | CH4 | CO2 and H2O | S (e.g. H2S and COS) |
| - | - | - | - | - | - |
| Effect: | Fuel | Poison | Diluent | Poison | Unknown |

**PAFC**

Phosphoric acid (H3PO4) systems are almost always fuelled by a hydrocarbon, which must undergo some sort of **fuel processing** to release the H2 gas that reacts at the anodes. PAFC systems can tolerate CO2 and unreacted hydrocarbons (e.g. methane), which act as diluents to the fuel; their concentrations should be minimised. CO gas, however, will poison the Pt catalysts at concentrations from 0.5%.

Solution:

* In situ fuel processing required (costly)
* CO removed by further processing and the shift reaction

| Species: | H2 | CO | CH4 | CO2/H2O | S etc |
| - | - | - | - | - | - |
| Effect: | Fuel | Poison (>0.5%) | Diluent | Diluent | Poison (>50 ppm) |

**MCFC**

CH4 can either act as a diluent or be internally reformed (see SOFCs). The MCFC has a very low sulphur tolerance due to poisoning, especially of the catalysts that aid fuel reforming.
The reactions of the MCFC require CO2 to be present in the fuel stream:

H2 + ½O2 + CO2 (cathode) → H2O + CO2 (anode)

Recall that CO32– ions are transported through the electrolyte from cathode to anode (see the section on high temperature fuel cells). This requirement for CO2 contrasts with the AFC, where it must be excluded. The CO2 is usually recycled externally by passing the exhaust gases through a combustor to convert unused fuel into water and CO2, which can then be fed back to the cathode inlet. This also serves to preheat the reactant air.

Solution:

* CO2 recycled externally
* Exhaust gases combusted to yield more CO2
* Sulphur removed from the fuel stream by fuel processors

| Species: | H2 | CO | CH4 | CO2/H2O | S etc |
| - | - | - | - | - | - |
| Effect: | Fuel | Fuel (via shift reaction) | Can be internally reformed | Diluent | Poison (>0.5 ppm) |

**SOFC**

SOFCs (and MCFCs) run at high enough temperatures to internally reform CO and hydrocarbons (e.g. petrol or methane) via reaction with H2O, producing CO2 and H2 via “shift” (or oxygenolysis) reactions:

C*n*H*m* + *n*H2O → *n*CO + (*m*/2 + *n*) H2
CO + H2O → CO2 + H2

Solution:

* Sulphur removed from the fuel stream by fuel processors

| Species: | H2 | CO | CH4 | CO2/H2O | S etc |
| - | - | - | - | - | - |
| Effect: | Fuel | Fuel (via shift reaction) | Can be internally reformed | Diluent | Poison (>1.0 ppm) |

**PEMFC**

PEM cells generally use pure hydrogen as the fuel, especially in portable applications where complicated reforming apparatus would be impractical. CO poisons PEM systems very easily because they rely on platinum catalysts: CO has a high affinity for Pt and occupies catalytic sites, preventing hydrogen fuel from reaching them. The processing equipment needed to reduce CO partial pressures to less than 10 ppm adds considerably to the cost of the system. Pure oxygen is used in air-independent applications such as submarines and space shuttles. Whilst difficult to implement, the use of pure O2 improves the performance of PEM cells significantly:

* The open circuit voltage increases due to the increased O2 partial pressure, as described by the Nernst equation
* The activation over-potential is reduced because of better use of catalyst sites
* The limiting current increases, reducing concentration over-potential losses, due to the absence of nitrogen gas

Solution:

* CO must be removed to ensure a long cell life
* Pure hydrogen can be used, either compressed, cryogenic or stored as a metal hydride

| Species: | H2 | CO | CH4 | CO2/H2O | S etc |
| - | - | - | - | - | - |
| Effect: | Fuel | Poison (>10 ppm) | Diluent | Diluent | Unknown |

Case study: Fuelling practicalities =

Supplying the cell's anodes with a constant and consistent supply of hydrogen is no small task. The following diagram shows some of the ways this can be achieved:

![](figures/fuel_chart_sml.png)

This diagram should be considered in terms of three levels: where we obtain the energy, how it is used to make hydrogen, and how that hydrogen is stored and used at the cell. Crude-oil-derived fuels still account for more than half of the world's total energy supply (petrol, diesel, aviation fuel, kerosene). These consist of simple and aromatic hydrocarbons of varying length. Add to this the consumption of coal and natural gas, and the fossil fuel contribution to the global energy supply goes well above 80%.
Fossil fuels can either be utilised directly in some high temperature fuel cells, or reformed to hydrogen, or used in conventional power plants to produce electricity, which can then power electrolysers. Renewable fuels are useful bio-matter, such as wood or plants, and renewable gas sources, such as the methane produced as waste decomposes in landfill sites. These can again be used to make hydrogen by reforming, bio-generation, or by conventional power plants and electrolysers. Nuclear power can make hydrogen indirectly by powering electrolysers.

![](../images/divider400.jpg)

#### Fuel storage

Once we have the hydrogen, there are several ways in which it can be stored:

* Compression in gas cylinders
* As a cryogenic liquid
* In a metal absorber, as a metal hydride
* Potentially in carbon nano-fibres (still under development)
* In glass micro-spheres

The specific energy of pure hydrogen (the energy per kilogram) is higher than that of any other fuel, at about 120 MJ kg–1. However, its energy density (energy per m3) is very low.

| **Form of storage** | **Energy density by weight (kWh kg–1)** | **Energy density by volume (kWh L–1)** |
| - | - | - |
| Gas (20 MPa) | 33.3 | 0.53 |
| Gas (24.8 MPa) | 33.3 | 0.64 |
| Gas (30 MPa) | 33.3 | 0.75 |
| Cryogenic liquid (–253 °C) | 33.3 | 2.36 |
| Metal hydride | 0.58 | 3.18 |

It is difficult to get a large mass of hydrogen into a small volume. Whilst fossil fuels can be processed to provide an on-demand supply of hydrogen, or even utilised directly by some cells, there are many applications for which a pure hydrogen supply is the only viable solution. These applications include anywhere where there is limited space for reforming equipment, where reforming would be too costly, where emissions must be tightly controlled, or where adding the additional fuel processing equipment would unacceptably reduce the efficiency of the system. Electrical energy output from wind turbines and hydroelectric generators might also be stored as hydrogen. The safe storage of pure hydrogen presents a tempting challenge to engineers and materials scientists. Either very high pressures must be used to squeeze the required mass into the required volume, or very low temperatures must be used to liquefy it. Hydrogen, however, does not liquefy above 22 K, and even in its liquid phase it has a rather low density of 71 kg m–3.

![](../images/divider400.jpg)
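To put the volumetric figures in the table above into perspective, here is a quick supplementary comparison. The 4 kg of hydrogen is an assumption chosen for illustration (roughly the quantity a small fuel cell vehicle might carry); the energy densities come straight from the table.

```python
# Tank volume needed for 4 kg of hydrogen, using the energy-density-by-volume
# column of the table above. Hydrogen carries ~33.3 kWh per kg.
mass_h2 = 4.0                  # kg, illustrative on-board quantity (assumed)
energy = mass_h2 * 33.3        # kWh on board

kwh_per_litre = {"gas, 20 MPa": 0.53, "gas, 30 MPa": 0.75,
                 "cryogenic liquid": 2.36, "metal hydride": 3.18}

for form, density in kwh_per_litre.items():
    print(f"{form:18s}: {energy / density:6.0f} L")
# 20 MPa gas needs ~250 L, a metal hydride bed only ~42 L (but it weighs far
# more, since only 1-2% of the tank mass is hydrogen, as noted below)
```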
![](../images/divider400.jpg)

Summary =

Problems such as CO2 emissions and the sustainability of our current energy supplies are going to result in the western world adopting lasting changes to the way we harness energy. It is likely that fuel cell technology has an important part to play in this future. This TLP should have taught the following:

* The principles, construction and thermodynamics of a fuel cell
* Differences between types of fuel cell – high temperature, low temperature, and the key components of each
* How the technology can be applied and possible future applications
* The complications involved in the fuel cell system as a whole
* The basis of a future hydrogen economy
* Problems associated with storing hydrogen

It is important to remember that the sums spent on developing fuel cells thus far have been very large indeed but, as yet, returns are sparse. Few people will encounter fuel cells as part of their everyday lives, but perhaps this is set to change in the near future.

Questions =

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. For each type of fuel cell below, what are the charge carrying species (electrolyte type)?
   (a) Polymeric exchange membrane
   (b) Immobilised alkaline solution
   (c) Phosphoric acid
   (d) Molten carbonate
   (e) Ceramic solid oxide
2. Which cells are able to internally reform?
3. The efficiency limit of H2 cells reduces as operating temperature is increased. Why are high efficiency systems often run at very high temperatures?
4. Which of the following statements are TRUE about the development of fuel cells?
   (a) Fuel cells have been around since the 19th century.
   (b) A fuel cell tractor was built in 1959.
   (c) PEM fuel cells are so expensive because of the platinum needed as a catalyst.
   (d) The first fuel cells were developed to run on hydrogen.
   (e) Fuel cell cars are at least a decade away.
5. Give two reasons why we don't all use fuel cells in our cars already.
6. Why is nickel used as the anode in SOFCs? How is it treated in order to aid bonding to YSZ?

### Deeper questions

*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

7. Give as many ways as you can think of to provide a mobile fuel cell with pure hydrogen.

Going further =

**Books:**

1. J. Larminie and A. Dicks, Fuel Cell Systems Explained, 2nd edition, John Wiley and Sons Ltd (2003).
2. M.H. Westbrook, The Electric Car: Development and Future of Battery, Hybrid and Fuel-Cell Cars, London: Institution of Electrical Engineers; Society of Automotive Engineers (2001).
3. S.C. Singhal and K. Kendall (Eds.), High-temperature Solid Oxide Fuel Cells: Fundamentals, Design and Applications, Elsevier Science (2004).
Aims

On completion of this TLP you should:

* understand what is meant by the *glass transition* in amorphous polymers, and its causes
* understand kinetic and thermodynamic analyses of the formation of glasses
* appreciate that the value of the *glass transition temperature* depends on the method of measurement, and also on the strain rate or cooling or heating rate
* understand how the glass transition affects the properties of rubber

Introduction

Collections of molecules can exist in three possible physical states: solid, liquid and gas. In polymeric materials, things are not so straightforward. For example, most polymers will decompose before they boil, and cross-linked polymers decompose before they melt. For many polymers the transition between the solid and liquid states is rather diffuse and difficult to pinpoint.

Amorphous polymers are viscous liquids when they are held at temperatures above their *glass transition temperature*, Tg. Below Tg, the material is solid, yet has no long-range molecular order and so is non-crystalline. In other words, the material is an amorphous solid, or a glass. The glass transition temperature is different for each polymer, but many polymers are above Tg at room temperature. In many cases the polymers are at least partially crystalline at room temperature and the temperature at which the crystals melt (Tm) is above room temperature. The graph below shows how some polymers are above Tg but below Tm at room temperature. Such polymers are rubbers (so long as they are largely amorphous) at room temperature. However, the polymer may flow like a liquid over long time periods as its amorphous component relaxes under the polymer's weight.

![Chart showing values of Tg and Tm for various polymers](images/image001.gif)

The glass transition of a polymer is related to the thermal energy required to allow changes in the conformation of the molecules at a microscopic level, and above Tg there is sufficient thermal energy for these changes to occur. However, the transition is not a sharp one, nor is it thermodynamically well defined. It is therefore different from melting a crystal, as will be explained later. A distinct change from rubbery (above Tg) to glassy (below Tg) behaviour is readily observable in a wide range of polymers over a relatively narrow temperature range. In the following sections the behaviour of polymers around the glass transition temperature will be explored. The effects of strain rate, cooling or heating rate and other factors affecting the glass transition temperature will also be explained.

Theory 1

When we extend or compress a polymer elastically (i.e. there is no permanent deformation), we try to move the chain-ends apart or together. For a simple polymer chain to change its conformation, individual C-C bonds must twist from the *trans* to *gauche* position or vice versa - i.e. the torsion angles must change. This is a thermally activated process. At low temperatures, there is not enough thermal energy available to allow torsion angle changes, so the conformation becomes frozen in. The temperature above which the torsion angles can change is called the glass transition temperature. The changes in conformation also depend on time-scale, so the apparent value of Tg depends on the time-scale over which the behaviour is being monitored. The strain in a polymer is accommodated by the change in shape of the individual molecules, but it should be noted that the response of the bulk polymer is influenced by the interactions between the molecules.
This affects the ability of the bonds to rotate, and also the viscosity of the bulk polymer. Therefore Tg depends on the polymer's architecture, and there are several factors influencing the transition:

#### Chain Length

Each chain end has some free volume associated with it. A polymer with shorter chains will have more chain ends per unit volume, so there will be more free volume. Hence Tg for shorter chains will be lower than for long chains. Note that the shorter-chained polymer also has more free volume frozen in below Tg than the long-chained polymer.

![Graph of volume against temperature](images/image002.gif)

#### Chain Flexibility

A polymer with a backbone that exhibits higher flexibility will have a lower Tg. This is because the activation energy for conformational changes is lower, so conformational changes can take place at lower temperatures.

#### Side Groups

Larger side groups can hinder bond rotation more than smaller ones, and therefore cause an increase in Tg. Polar groups such as Cl, CN or OH have the strongest effect.

#### Branching

Polymers with more branching have more chain ends, so have more free volume, which reduces Tg, but the branches also hinder rotation, like large side groups, which increases Tg. Which of these effects is greater depends on the polymer in question, so Tg may rise or fall.

#### Cross-linking

Cross-linking reduces chain mobility, so Tg will be increased. It also affects the macroscopic viscosity of the polymer, since if there are cross-links between the chains, then they are fixed relative to each other, so will not be able to slide past each other.

#### Plasticisers

Small molecules, typically esters, added to the polymer increase the chain mobility by spacing out the chains, and so reduce Tg.

### Time Effects

The properties of an amorphous polymer above Tg can change with time:

* At very short loading times the polymer can still be glassy, because there is not time for the chains to move.
* At intermediate times the polymer may be rubbery - i.e. chains can uncoil and recoil between entanglements, which remain stable.
* At very long times, the chains can move past each other permanently, and so the polymer behaves as a viscous liquid.

![Graph of elastic modulus against temperature](images/image003.gif)

For a useful rubbery material it is necessary to suppress chain sliding. One way of doing this is to increase the amount of cross-linking in the polymer. In order for the chains to slide, the cross-linking bonds must first be broken, so increasing the number of cross-links decreases the chains' ability to slide over each other. This extends the rubbery region to higher temperatures, so the graph looks like this:

![Graph of elastic modulus against temperature](images/image004.gif)

More about the slight rise in elastic modulus with increasing temperature in rubber is available elsewhere in this series of TLPs.
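The time-scale dependence noted above can be made semi-quantitative with a crude Arrhenius picture of thermally activated conformational changes. This is purely an illustrative sketch, not part of the original TLP: the prefactor and activation energy are invented numbers (cooperative motion near Tg gives far larger effective values than single bond rotations, and real glass transitions follow WLF kinetics rather than a simple Arrhenius law), chosen only so that the output lands near a typical polymer Tg.

```python
import math

# Rate of conformational rearrangement: rate = A * exp(-Ea / (R*T)).
# The apparent Tg is roughly the temperature at which one rearrangement
# occurs within the observation time: rate * t_obs ~ 1, giving
# T ~ Ea / (R * ln(A * t_obs)).
R = 8.314    # J mol-1 K-1
Ea = 250e3   # J mol-1; assumed effective activation energy
A = 1e35     # s-1; assumed effective attempt frequency

for t_obs in (1e-3, 1.0, 1e3):  # observation time-scales in seconds
    Tg_app = Ea / (R * math.log(A * t_obs))
    print(f"t_obs = {t_obs:7.0e} s  ->  apparent Tg ~ {Tg_app:.0f} K")
```

Faster probing (smaller t_obs) gives a higher apparent Tg, which is exactly the strain-rate dependence discussed in this TLP.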
Theory 2 - kinetics vs thermodynamics =

The formation of glasses can be understood from both a kinetic approach and a thermodynamic approach. In this section we will review both.

### Kinetic Approach to Glasses

Consider a polymeric liquid being cooled towards its melting temperature. Once the temperature of the liquid falls below Tm, the solid crystalline phase is thermodynamically favourable. In order for the liquid to undergo a phase transition to the solid state, a two-step process must take place:

1. Nucleation of solid seeds
2. Growth of the seeds

Nucleation is the formation of small crystalline solid particles in the liquid. As a result, a new interface is formed between the solid particle and the liquid. This interface has an associated energy, the interfacial energy. For a successful nucleation event to occur, the free energy released by forming the solid must "pay" for the energy cost of creating the new interface. This released energy is called the "driving force" for nucleation. As we cool the liquid just below Tm we see no nucleation at first. Instead, nucleation occurs at a temperature T < Tm, below the expected temperature. When the polymer is still liquid below Tm it is said to be undercooled. The amount of "undercooling" is limited by the nucleation rate.

![Graph of temperature against nucleation rate](images/image009.gif)

The second stage of the phase transition is the growth of the nucleated seeds. This is a thermally activated process, which means that its rate is dependent on the temperature. So for fast crystal growth the ideal method is to cool the polymer to just above Tg to allow nucleation to occur, and then raise the temperature to just below Tm to allow growth to occur.

![Graph of temperature against growth rate](images/image010.gif)

Understanding these concepts, we can now think about how best to cool a liquid in order to form a glass. To ensure we form a glass we want to reduce the number of nucleation events as much as possible (otherwise we will have crystals, not an amorphous solid) and not allow growth to occur. By cooling the liquid very quickly (i.e. *quenching* it) it is possible to reduce the mobility of the molecules to the point where they cannot move around to order themselves periodically, as they do not have the energy to diffuse far enough. This concept can be expressed on a TTT (time-temperature-transformation) graph.

![Graph of temperature against time](images/image011.gif)

The graph represents the degree of crystallinity of the polymer as it is cooled. The line marked "crystal" indicates a specific degree of crystallinity (for example, it may indicate 90% crystallinity in the sample). The TTT graph is used for isothermal transformations, but here it is loosely applied to continuous cooling. If the cooling rate is fast enough then the polymer can be cooled so that it does not enter the crystal region of the graph. In this case, although the polymer is below Tm, it does not crystallise because the molecules cannot move to order themselves periodically (as in a crystal). Instead the polymer has formed an amorphous solid, or a glass.

![Graph of temperature against time](images/image012.gif)

### Thermodynamic Approach to Glasses

Consider the red line from the graph above. Above Tm the polymer is a liquid. At temperatures a long way below Tm the polymer behaves like a solid, although it is not crystalline. In this state the glass has the properties of a solid, but may exhibit aspects of liquid behaviour over long time scales. (There is a common misconception that inorganic glasses behave in this same way. This supposed liquid behaviour is incorrectly thought to be responsible for the windows of old buildings often being thicker at the bottom than at the top; this is discussed briefly, with citations, elsewhere for those who are interested.) Have a look at the graph of the polymer's enthalpy at different temperatures:

![Graph of enthalpy against temperature](images/image013.gif)

If nucleation can occur and the rate of cooling is not too high, then the green line will be followed and the polymer will crystallise below Tm to form a crystalline solid.
If the polymer is quenched so that ordering of the molecules cannot take place, then the blue line will be followed and the polymer will form a glass below Tm. Let us now imagine what would happen if we could follow the red line and keep cooling the liquid without forming a crystalline solid. There is a point, TE, at which the enthalpy of the supercooled liquid falls below that of the crystalline solid. Similarly, there is a temperature, TK, the Kauzmann temperature, below which the entropy of the liquid would be less than that of the corresponding solid. In other words, below the Kauzmann temperature the liquid should be more ordered than the corresponding solid. As we know, solids are highly ordered and liquids are not, so the liquid cannot reach this condition. The paradox is avoided because, by the time the Kauzmann temperature has been reached on cooling, the liquid has gone through the transition into a glassy state.

#### Important points relating to phase transitions within polymers

* Glass formation can be explained both kinetically and thermodynamically
* Glasses can be formed by quenching (rapidly cooling) the polymer so that the molecules do not have time to order themselves into a periodic crystal
* Glasses exhibit the properties of a solid, but over long time scales can flow like a liquid
* Below the Kauzmann temperature the polymer must be either a crystalline solid or a glass, as the liquid phase is thermodynamically unstable

Measurement of *T*g =

There are several methods available to measure the glass transition temperature, some of which are given below. Since the value of the glass transition temperature depends on the strain rate and cooling or heating rate, there cannot be an exact value for Tg.

### Mechanical Methods

It is possible to calculate a value for the glass transition temperature by measuring the elastic modulus of the polymer as a function of the temperature, for example by using a torsion pendulum. Around Tg there is a large fall in the value of the modulus. The frequency of the oscillation is important, since Tg depends on the time allowed for chain segment rotation.

![Graph of elastic modulus against temperature](images/image014.gif)

A more common method is *dynamic mechanical thermal analysis* (DMTA), which measures the energy absorbed when a specimen is deformed cyclically as a function of the temperature; a plot of energy loss per cycle as a function of temperature shows a maximum at Tg.

![Graph of energy lost against temperature](images/image015.gif)

### Thermal Methods

As was shown in the previous section, the enthalpy of a polymer decreases as the temperature decreases, but with a change in slope in the graph at Tg. Taking the derivative of this graph with respect to temperature, the specific heat capacity can be plotted, as below. The specific heat capacity, *C*p, can be measured using calorimetry, e.g. differential scanning calorimetry (DSC). The value of Tg depends on the heating or cooling rate.

![Graph of specific heat capacity against temperature](images/image016.gif)

### Volume Methods

The changes in conformation that occur above Tg require more volume, so plotting a graph of specific volume or thermal expansion coefficient against temperature will give a value for Tg. The actual volume of the molecules stays the same through Tg, but the *free volume* (the volume through which they can move) increases.

![Graph of specific volume against temperature](images/image017.gif)
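In practice, a Tg value is often extracted from such dilatometry data by fitting straight lines to the glassy and rubbery regions and taking their intersection. A minimal sketch of this procedure is given below; the data are synthetic, with made-up expansivities and a "true" Tg of 100 °C built in.

```python
import numpy as np

# Synthetic specific-volume data: two linear regimes meeting at Tg = 100 C
T = np.linspace(0.0, 200.0, 81)
Tg_true = 100.0
V = np.where(T < Tg_true,
             0.840 + 2.0e-4 * T,                                  # glassy branch
             0.840 + 2.0e-4 * Tg_true + 6.0e-4 * (T - Tg_true))   # rubbery branch

# Fit straight lines well below and well above the transition
m1, c1 = np.polyfit(T[T < 60.0], V[T < 60.0], 1)
m2, c2 = np.polyfit(T[T > 140.0], V[T > 140.0], 1)

# Tg estimate = intersection of the two fitted lines
Tg_est = (c2 - c1) / (m1 - m2)
print(f"Estimated Tg = {Tg_est:.1f} C")  # recovers ~100 C
```

With real (noisy, slightly curved) data the answer depends on which temperature windows are chosen for the fits, which is one more reason why a measured Tg is method-dependent.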
### Dielectric Constant

If a varying electric field is applied to a polymeric material, any polar groups will align with the field. Below Tg rotation of the bonds is not possible, so the permittivity will be low, with a big increase around Tg. At higher temperatures the increased thermal vibrations cause the permittivity to drop again. If the frequency of the field is increased, the polar groups have less time to align, so the glass transition occurs at a higher temperature.

![Graph of permittivity against temperature](images/image018.gif)

Demonstrations

### Bouncy Balls

A bouncy ball is made of a polymer that is above its Tg at room temperature, with cross-linking keeping the ball in its spherical shape. The height that the ball rebounds to when dropped depends on how much energy is lost during the bounce. At room temperature the ball loses little energy when it deforms, so it can rebound to a large fraction of its original height. As the temperature is reduced, the viscosity of the polymer increases, so more of the elastic strain energy is dissipated, and the ball does not bounce as high.

![Graph of energy lost against temperature](images/image015.gif)

At Tg nearly all the energy is dissipated, and the ball barely bounces at all. As the temperature is reduced further, to below Tg, there is not enough energy for conformational changes to occur, and the ball becomes glassy. The energy losses above Tg are due to the viscosity as the conformation changes and the polymer chains move past each other; since these movements do not occur below Tg, the energy is not dissipated, so the ball bounces again.

The videos show how high a bouncy ball bounces at different temperatures. The videos were filmed at 500 frames a second, and played back at 50 frames a second, so are at one tenth of actual speed.

| | |
| - | - |
| Ball bouncing at 25 °C, well above Tg | Ball bouncing at -50 °C, just above Tg |
| Ball bouncing at -70 °C, close to Tg | Ball bouncing at -190 °C, well below Tg |

The second set of videos are close-ups of the bounce itself, showing the deformation of the ball. The ball bounces both above and below Tg, although the process is very different. Above Tg the conformation of the polymer chains changes, and the deformation of the ball can clearly be seen. Below Tg, the conformation is frozen, so it is only the interatomic bonds that are strained; the macroscopic deformation is much less, and is not observable in the videos.

| | |
| - | - |
| Rubbery bounce, well above Tg | Bounce near Tg |
| Glassy bounce, well below Tg | |
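The rebound heights in these videos can be summarised by a coefficient of restitution, e = √(h1/h0). The numbers below are illustrative guesses, not measurements from the videos, chosen to mimic the rubbery, near-Tg and glassy cases.

```python
import math

h0 = 1.00  # drop height, m (assumed)
rebounds = [("well above Tg", 0.80), ("near Tg", 0.05), ("well below Tg", 0.55)]

for label, h1 in rebounds:
    e = math.sqrt(h1 / h0)     # coefficient of restitution
    lost = 1.0 - h1 / h0       # fraction of energy dissipated in the bounce
    print(f"{label:14s}: e = {e:.2f}, energy lost = {lost:.0%}")
```

The pattern of high rebound above Tg, almost none at Tg, and a good rebound again in the glassy state is exactly what the energy-loss curve above predicts.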
### Silly Putty

The time dependence of the deformation of a material that exhibits a glass transition can be demonstrated with "Silly Putty". If a ball of putty is left at rest at room temperature, it will slowly deform plastically under its own weight. However, if the ball of putty is hit with a hammer, again at room temperature, then it is possible to smash the putty into pieces.

| | |
| - | - |
| Silly putty deforming under its own weight | Silly putty striking the ground |
| Silly putty being hit by a hammer | |

The first video shows a small ball of putty (initially approximately 40 mm diameter) deforming under its own weight at room temperature. The deformation took just over 40 minutes, so the film has been speeded up 80 times. The height of the ball can be judged against the background. The initial deformation is much more rapid than that towards the end, as the following numbers show:

| Loss of height / mm | Time / mins:secs |
| - | - |
| 5 | 1:10 |
| 10 | 5:30 |
| 15 | 25:00 |
| 16 | 42:00 |

The other two videos were both filmed at 1000 frames per second, and played back at 4 frames per second, so are at 1/250th of real speed. The second video is of a ball of putty hitting the ground after being dropped from a height of 1 m. The ball deforms, but returns to its original shape as it rebounds; when a ball of putty is dropped it rebounds to a large fraction of its original height. The third video is of a ball of putty being hit with a hammer, also at room temperature. The theory states that at fast strain rates the material will be glassy, and this is shown in this video, although it may not seem like it at first. The ball deforms quite a lot initially, and then breaks apart. This can be explained by the putty being glassy, with an elastic modulus low enough for there to be a visible strain before a critical strain is reached and brittle fracture occurs.

Summary =

You should now be familiar with the following concepts:

* Above Tg polymers are rubbery, whereas below Tg they are glassy
* Rubbery behaviour arises from the polymer's ability to change its conformation at high temperatures
* Glassy behaviour arises from the polymer's lack of ability to change its conformation at low temperatures
* Many factors affect the Tg of a polymer. Some factors are to do with the chemistry of the polymer:
  + chain length
  + chain flexibility
  + side groups
  + branching
  + cross-linking
  + the presence of plasticisers
* Other factors are to do with the method of measuring Tg:
  + strain rate
  + cooling or heating rate
* You should also understand the theory of kinetics, which explains how glasses form, and have noted that there are many different experimental methods for measuring the glass transition temperature in polymers.
* Finally, you should have observed how the behaviour of a rubber ball varies when it is bounced above, below and near its Tg, and seen how the behaviour of silly putty varies at different strain rates.

Questions =

1. Explain why cooling a liquid very fast (quenching) can lead to the formation of a glass. Your answer should use what you have learnt about the kinetics of glass formation.
2. What is meant by the 'free volume' of a polymer?
3. What is the significance of the theoretical 'Kauzmann temperature'?
4. Why is the measured value of *T*g dependent on the method of measurement?
5. Using what you understand about the way polymer chains behave above and below *T*g, why do you think glasses tend to be brittle whereas rubbers are not?
6. What is meant by the conformation of a polymer? Using a Newman projection, show what is meant by the 'trans' and 'gauche' states.
7. Explain how the dielectric constant of a polymer is affected by the transition to the glassy state.
8. Explain why the glass transition temperature is strain-rate dependent.
9. Below and above the *T*g of a rubber ball it will rebound after impact. At *T*g the ball will not rebound very much at all.
Explain how the ball bounces in these three cases from an energetic point of view.

Going further =

### Websites

* A polymer science education site based at the University of Southern Mississippi.
* A site that includes video of a huge ball of silly putty being dropped from a tall building!
Aims

The aim of this TLP is to provide an introduction to the static behaviour and flow behaviour of granular materials. Granular materials are seen in many forms in everyday life, such as powders, sands, soils and mineral ores, and pastes such as toothpaste. Other materials such as cement and concrete are in paste form prior to the setting and hardening reactions that occur in cement.

On completion of this TLP you should:

* Understand the concept of the angle of repose of piles of granular material and the factors which determine this angle.
* Understand the ways in which soils fail and the importance of this knowledge in geotechnical design.
* Understand the phenomenon of liquefaction.
* Understand how the concept of dilatation explains the phenomenon of the surface drying under your feet when you walk on wet sand.

Before you start

There are no special prerequisites for this TLP, although there is a link to another TLP in which the Coulomb yield criterion (also known as the Mohr-Coulomb criterion) is discussed.

Introduction

For a wide range of engineering materials, processing requires that they are in powder or granular form at some stage. For example, recycled plastics are routinely pelletised, and nickel ore in granular form is routinely transported by ship across the Pacific Ocean from the Philippines, the largest exporting country of nickel ore, to China, where it is used in the manufacture of stainless steel. Soils are routinely piled into heaps during the construction of new roads and bridges and to conceal industrial plants.

![](images/Plasgran-plastic-recycling-pellets-2-1.jpg)

Pelletised recycled plastic (image from: http://www.plasgranltd.co.uk/how-is-plastic-recycled/)

It is vitally important to understand the conditions under which soil and heaps of material in granular form remain stable, because the conditions under which they are no longer stable can have devastating consequences. For example, under suitably adverse conditions in the transport of bulk cargo such as nickel ore and iron ore in the holds of ships, the cargo can transform abruptly from a solid state to an almost fluid state, i.e., it can liquefy. If this occurs, the stability of the vessel transporting the cargo will be affected. The consequences can be dramatic – the ship's structure can be damaged and, under severe conditions, the ship can capsize with the loss of life. Fortunately, there are now operational guidelines for the transport of mineral ores which should make such catastrophic events much rarer in the future.

Many countries around the world are susceptible to landslides and mudslides where, as a consequence of severe rainfall, the pore pressure in the soil rises so that the soil is unable to bear both an equal-all-round compressive spherical stress and any shear stress – the soil becomes a slippery mess. The consequences can be catastrophic for local communities, with loss of life, unless areas at risk of mudslides and landslides can be evacuated beforehand.

A third example is that of the stability of piles of granular material. It is not unusual to find such stockpiles failing suddenly, as in the example below.

![](images/failure_pellet_stockpile.jpg)

Failure of a pellet feed stockpile at the port of Vitoria in south-east Brazil (courtesy Evandro Moraes da Gama, Matheus Henrique de Castro, Carlos Gomes, Felipe Abbas da Gama, *Geomaterials*, **4**, 18-26 (2014))

Coal waste is also routinely piled into heaps.
Again, under adverse circumstances, water can build up in the heap so that it is unable to bear both an equal-all-round compressive spherical stress and any shear stress. The Aberfan disaster in Wales occurred after a prolonged period of heavy rain in October 1966, when liquefaction occurred in the material of a coal heap adjacent to the village of Aberfan. 144 people died as a consequence of the collapse of this coal heap.

Another way in which the flow behaviour of granular material is relevant in everyday life is when considering the nature of quicksand. The two essential components of quicksand are a fluid, such as water, and fine particles, such as clay or fine sand. Central to its behaviour is its ability to liquefy, so that materials on top of it can sink into it without being fully submerged.

Clearly, therefore, understanding the mechanical behaviour of granular material is of intense practical interest, as well as being of interest in its own right.

Terzaghi's effective stress principle =

Rocks disintegrate with weathering. In the processes of sedimentary geology, when rivers discharge sediments into a pool, various grades of soil grain are deposited at different distances from the point of discharge. The ancient 'soft rock' termed the Cambridge Gault clay, beneath the city of Cambridge in England, is a 20 m thick layer of silt- or clay-size soil grains, deposited on the bed of an ancient ocean. It now outcrops just west of Cambridge. Above it, when the ancient warm sea bed was far away from points where rivers were depositing sediment, coral could grow. This coral formed the chalk rock that extends far across Europe; locally it outcrops near Cambridge, with ripples in the chalk rising to form low ridges and dipping into vales. This chalk rock also forms the cliffs along the English Channel coast. Vertical cracks form when lateral pressure falls in this chalk. These cracks can leave a heavy cliff face resting on a foundation of Gault clay that, when it fails, lets the cliff face fall into the English Channel.

Anywhere that a sediment of strong durable grains accumulates as a uniform aggregate of sand or gravel, it can be excavated for use in construction: such aggregate is mixed with water and cement powder as a component of mortar in brickwork, or in structural concrete reinforced with mild steel bars. Where natural ground is soft enough, ungraded soil can be excavated, hauled to site and spread out in layers to be compacted with rollers to build up large road or dam embankments.

In natural or compacted soft ground the strong soil grains form an effectively stressed aggregate structure in which forces are transmitted from grain to grain through the ground. In a volume of ground, part of the volume is occupied by solid soil grains, and the pore space between grains in saturated ground contains incompressible pore water. Whenever soft ground is loaded there will be pore pressure gradients. Terzaghi's primary consolidation theory analysed the transient flow of ground water and the resulting surface settlements.

Terzaghi's effective stress principle applied to sand recognises that sand needs to have strength if a slope of sand is not to slump. A slope can only be stable if there are intergranular compressive stresses. The total compressive stress that is applied normal to a particular plane, σ, is equal to σ' + *u*, where σ' is the effective compressive stress normal to the plane and *u* is the pore water pressure. This is Terzaghi's effective stress principle.
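A minimal worked example, with assumed illustrative values: consider a point 5 m below the surface of saturated sand of bulk unit weight 20 kN m-3, with the water table at the ground surface. Then

\[ \sigma = 20 \times 5 = 100{\rm{ kPa}}, \qquad u = 9.81 \times 5 \approx 49{\rm{ kPa}}, \qquad \sigma ' = \sigma - u \approx 51{\rm{ kPa}} \]

Only the 51 kPa carried grain-to-grain contributes frictional strength; if heavy rain or shaking drives *u* up towards 100 kPa, σ' falls towards zero and the soil loses its ability to carry shear stress, which is the liquefaction condition described later in this TLP.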
This principle is illustrated by the behaviour of sand in the two bottles shown in the section on the angle of repose below. The air can easily move from pore space to pore space in the sand aggregate, but the water will take time to do so. These two bottles show that if there is time for the pore water pressure to diffuse (i.e., there is drainage), the same slope at repose is achieved with and without the presence of water. If there is not enough time, liquefaction will occur, as could happen in the rocking of a bulk carrier, i.e., a merchant ship designed to transport unpackaged bulk cargo such as metal ores.

Ground settlement and centrifuge modelling experiments

In a mudslide, pore pressure does not have time to drain because the grains of earth are small and the pathways for drainage between the grains are too narrow. Therefore, under suitably adverse conditions, gravity can cause a mudslide where there is soft clay and where structures such as a road embankment are built. In general, with disturbed soil, as the pore water drains and the grains of earth come into close contact, settlement of the ground surface occurs over time. Where tunnels are constructed in soft clay beneath urban areas, there is a risk that, above these tunnels, settlement of the ground surface can cause problems such as the partial or total collapse of buildings at street level.

The engineering problem of ground settlement can be modelled at reduced scale and increased acceleration using a large centrifuge with a hopper which places a sand embankment on a clay model foundation. Since, mathematically, settlement is a diffusion problem, the shearing in compression of the clay over time will take place in centrifuge experiments at a reduced time scale of 1/*n*², where *n* is the multiple of the gravitational acceleration *g* to which the soil is subjected in the experiments. Therefore, in a centrifuge experiment in which testing is undertaken at 100*g*, the reduction in time is a factor of 10⁴. Hence, a suitably designed seven-hour centrifuge experiment at 100*g* can enable reliable predictions to be made about ground settlement over a period equivalent to 8 years at ground level on the Earth under *g*.
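The 1/*n*² scaling quoted above can be checked with one line of arithmetic; the sketch below simply re-derives the seven-hour figure from the 8-year prototype time, both numbers being taken from the text.

```python
# Diffusion-controlled settlement: model time = prototype time / n^2
n = 100                      # centrifuge test at 100 g
t_prototype_years = 8.0      # full-scale settlement period from the text

t_model_hours = t_prototype_years * 365.25 * 24 / n**2
print(f"model duration ~ {t_model_hours:.1f} h")  # ~7 h, as quoted
```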
Angle of repose =

When a container full of granular matter is poured onto a flat horizontal surface from a point source such as a funnel, it will form a conical pile of material. This conical pile will have a characteristic angle of repose or, equivalently, slope of repose. The angle of repose is the angle between the horizontal surface and the sloping surface of the pile; the tangent of this angle is the slope of repose.

It is found experimentally that the angle of repose is determined by a number of factors. One obvious factor is friction, and a second is cohesion, caused by the presence of liquid bridges between granules which enable the granules to stick together.

Simple experiments with screw-top glass bottles half-full of fine-grain loose sand are able to show various aspects of the flow behaviour of granular materials. In the videos below there are two bottles, one half-full of sand and half-full of air, and the second half-full of sand and half-full of water. If, to begin, both bottles are rolled slowly on their sides on a horizontal table, the sand in both bottles will move to form packings of the granules with flat horizontal surfaces.

If the bottle half-full of sand and half-full of air is tilted to stand on its end, it can be seen that the sand will form a slope with a particular angle of repose. If the bottle is further tilted in the same direction, the assembly of granules on the slope will be seen to flow in such a way that the individual granules *rotate* as they flow down the slope, rather than *slide down* the slope, in order to maintain the angle of repose.

If the bottle half-full of sand and half-full of water is tilted to stand on its end, the outcome depends on the timescale over which the tilt occurs. If the timescale is short, e.g., less than a second, the surface of the sand is disrupted in such a way that the granules pack down eventually to a horizontal surface, so that the surface behaves as though it is a liquid. If the tilt occurs gradually, e.g., over a period of a few seconds, it can be seen that the slope attained by the sand when the bottle is tilted to stand on its end is the same, within experimental error, as in the bottle half-full of sand and half-full of air. If the bottle is further tilted slowly, the assembly of granules on the slope behaves as it does in air.

In clays such as London Clay, consolidated over some 50 million years, large sheared blocks of clay known as 'greasy-backs' can fall from the faces of tunnels during excavation. The bottom surfaces of these sheared blocks consist of regions of soil where local expansion has occurred and where the failure process on shearing is a consequence of the rotation of granules in these regions, as if they were tumbling down an angle of repose, rather than the translation of the granules. Examination of the surfaces of these greasy-backs after failure shows that they have the characteristics of plasticised clay paste resembling heavy engine grease (C.N.P. Mackenzie, Traditional Timbering in Soft Ground Tunnelling: A Historical Review, British Tunnelling Society, 2014).

Predictions of the angle of repose

Predicting the angle of repose for a particular family of granules is not straightforward: most angles of repose are acquired from experimental measurements, such as those quoted in the Wikipedia entry on 'Angle of repose'. For many granular materials a typical angle of repose will be between 25° and 40°.

For smooth spherical particles all of the same size, a very simple model can be used to appreciate how the packing of spheres in three dimensions has consequences for the stability of slopes. Suppose three identical spherical particles on a slope of angle θ relative to the horizontal pack together so that they form the base of a regular tetrahedron when a fourth identical sphere is added on top of these three particles. Depending on the orientation φ of the base of the tetrahedron with respect to the slope, the tetrahedron will be stable if the angle θ is increased from zero (when the particles are on a horizontal base) up to a critical angle θc at which the fourth sphere rolls away down the slope from the three identical spheres (all presumed to be (just) held by friction on the slope at this angle). At particular angles φ and θ, simple geometry shows that the top sphere is only stable if the vector defining the gravitational force points through the projection of a suitable base triangle on the horizontal plane. The corners of this base triangle are defined by the positions of the centres of mass of the three identical spheres on the base of the tetrahedron. For a particular φ, the maximum angle of stability for this geometry is

\[{\theta \_{\rm{c}}} = \arctan \frac{1}{{2\sqrt 2 \cos (60^\circ - \varphi )}}\]

Hence, when φ = 0°, when one of the corners of the base triangle is at the lowest point on the slope, θc = 35.26°; and when φ = 60°, when one of the sides of the base triangle is at the lowest point on the slope, θc = 19.47°. A graph of θc as a function of φ is shown below.

![repose angle](images/repose_angle.jpg)

If it is assumed that the orientations of the bases of such tetrahedra are random on a pile of granular matter, then the observed angle of repose for smooth non-cohesive particles can be argued to be the average value of θc over the interval 0° < φ < 60°, i.e., 23.8°, determined numerically using Simpson's rule or the trapezium rule. This equates to an effective coefficient of friction, tan θc, of 0.44.
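The 23.8° average is easy to reproduce numerically. The short sketch below applies the trapezium rule to θc(φ) over 0° ≤ φ ≤ 60°; it is an independent check, not the TLP's own calculation.

```python
import math

def theta_c(phi_deg):
    """Critical angle (degrees) for the four-sphere tetrahedron model."""
    psi = math.radians(60.0 - phi_deg)
    return math.degrees(math.atan(1.0 / (2.0 * math.sqrt(2.0) * math.cos(psi))))

# Trapezium-rule mean of theta_c over 0 <= phi <= 60 degrees
N = 600
vals = [theta_c(60.0 * i / N) for i in range(N + 1)]
mean = (sum(vals) - 0.5 * (vals[0] + vals[-1])) / N
print(f"mean theta_c = {mean:.1f} deg, tan(mean) = {math.tan(math.radians(mean)):.2f}")
```

Running this gives a mean close to 23.8° and an effective friction coefficient of about 0.44, in agreement with the values quoted above.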
Principle of normality

The principle of normality follows from a detailed consideration of the yield surfaces of work-hardening materials and ideal plastic materials by Daniel Charles Drucker in 1951, in a paper entitled 'A more fundamental approach to plastic stress strain relations', Proc. 1st U.S. Nat. Congr. of Appl. Mech., pp. 487-491. In this paper Drucker established that these yield surfaces must be convex and that the vector sum of the plastic strain increments (or flow increments) at any point on the yield surface is normal to the yield surface. This is the principle of normality.

To illustrate what this means in practice for an ideal plastic metal, in which hydrostatic stress does not cause plastic deformation, we can consider the von Mises yield criterion in plane stress. In plane stress in principal stress space with principal stresses σ1 and σ2 and where σ3 = 0, the von Mises yield surface is defined by the equation

\[\sigma \_1^2 + \sigma \_2^2 - {\sigma \_1}{\sigma \_2} = 1      (1)\]

if the uniaxial yield stress, *Y*, is taken to be unity. At and beyond yield, the flow behaviour of ideal plastic metals is governed by the Lévy-Mises equations:

\[\frac{{\delta {\varepsilon \_1}}}{{{\sigma \_1} - \frac{1}{2}({\sigma \_2} + {\sigma \_3})}} = \frac{{\delta {\varepsilon \_2}}}{{{\sigma \_2} - \frac{1}{2}({\sigma \_3} + {\sigma \_1})}} = \frac{{\delta {\varepsilon \_3}}}{{{\sigma \_3} - \frac{1}{2}({\sigma \_1} + {\sigma \_2})}}      (2)\]

for principal plastic strain increments δε1, δε2 and δε3 parallel to the principal stresses σ1, σ2 and σ3 respectively.
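Before the formal proof that follows, normality can be checked numerically at an arbitrary point on the locus (1). This sketch (assuming Y = 1, σ3 = 0 and an arbitrarily chosen σ1) computes the Lévy-Mises flow direction and the local tangent to the yield locus, and shows that their dot product vanishes.

```python
import math

# Pick a point on the plane-stress von Mises locus s1^2 + s2^2 - s1*s2 = 1
s1 = 0.8
s2 = (s1 + math.sqrt(4.0 - 3.0 * s1**2)) / 2.0  # positive root of the quadratic in s2

# Levy-Mises flow increments (equation (2)) with s3 = 0, up to a positive factor
flow = (s1 - s2 / 2.0, s2 - s1 / 2.0)

# Tangent to the locus: rotate the gradient of f = s1^2 + s2^2 - s1*s2 - 1 by 90 degrees
grad = (2.0 * s1 - s2, 2.0 * s2 - s1)
tangent = (-grad[1], grad[0])

dot = flow[0] * tangent[0] + flow[1] * tangent[1]
print(f"flow . tangent = {dot:.2e}")  # ~0: the flow vector is normal to the locus
```

In fact the gradient here is exactly twice the flow vector, which is the content of the analytic proof below.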
Examining the equation for the von Mises yield surface, the tangent at a point (σ1, σ2) on the yield locus can be found by differentiating equation (1) implicitly:

\[2{\sigma \_1}{\rm{d}}{\sigma \_1} + 2{\sigma \_2}{\rm{d}}{\sigma \_2} - {\sigma \_2}{\rm{d}}{\sigma \_1} - {\sigma \_1}{\rm{d}}{\sigma \_2} = 0       (3) \]

Rearranging this,

\[\frac{{{\rm{d}}{\sigma \_2}}}{{{\rm{d}}{\sigma \_1}}} = - \frac{{2{\sigma \_1} - {\sigma \_2}}}{{2{\sigma \_2} - {\sigma \_1}}}       (4)\]

Now, examining equation (2) for the situation where σ3 = 0, it follows that

\[\frac{{\delta {\varepsilon \_2}}}{{\delta {\varepsilon \_1}}} = \frac{{2{\sigma \_2} - {\sigma \_1}}}{{2{\sigma \_1} - {\sigma \_2}}}      (5)\]

so that

\[\frac{{\delta {\varepsilon \_2}}}{{\delta {\varepsilon \_1}}}.\frac{{{\rm{d}}{\sigma \_2}}}{{{\rm{d}}{\sigma \_1}}} = - 1       (6)\]

Hence, in words, the product of the gradient of the tangent at a point (σ1, σ2) on the yield surface and the gradient of the vector [δε1, δε2] defining the plastic flow increments in the (σ1, σ2) plane is minus one, i.e., these two gradients are perpendicular. Since the third plastic strain increment δε3 is parallel to the principal stress σ3, it follows that the vector sum of plastic flow increments [δε1, δε2, δε3] is normal to the yield surface at (σ1, σ2), i.e., the principle of normality is proved for the von Mises yield criterion.

The animation below shows how, in two dimensions in (σ1, σ2) space, the vector defining the plastic flow increments (in red) is normal to the von Mises yield surface (in black). The angle can be varied by moving the cursor in the bottom left of the animation.

In the language of soil mechanics, the principle of normality is known as the associated flow rule (see, for example, A. Schofield, *Disturbed Soil Properties and Geotechnical Design*, Thomas Telford Ltd., London, 2005, p. 91). A further result which follows from the convexity of yield surfaces and this principle is that

\[{\rm{d}}{\sigma \_{ij}}{\rm{d}}\varepsilon \_{ij}^{\rm{P}} \ge 0       (7)\]

where dσij are stress increments created by an external agency which produce very small plastic strain increments, dεijP (as well as elastic strain increments). In words, this is a statement that the plastic work done by an external agency is always positive. Applying equation (7) to a triaxial stress test on a soil, where *q* is the axial compressive stress, σa, minus the radial compressive stress, σr, and where *p'* is the mean effective compressive stress, generates the condition

\[{\rm{d}}p'{\rm{d}}v + {\rm{d}}q{\rm{d}}\varepsilon \ge 0       (8)\]

where d*v* is the incremental change in the specific volume of the soil aggregate and dε is the incremental change in pure distortion. The specific volume is the ratio of the volume occupied by soil to that which it would occupy if the voids were eliminated. In soil mechanics the convention is that d*p*', d*q*, d*v* and dε are all positive. Equation (8) is relevant to the discussion below of how soils at the critical state and on the 'wet' side of the critical state fail.

Yielding of disturbed saturated soils in triaxial stress tests

A schematic yield surface ABCDEFG of a soil, plotted as a function of the two parameters η and vλ, is shown in the diagram below. The vertical axis is η, the stress obliquity, defined as q/p', where in a triaxial test on a cylinder of soil, q is the axial compressive stress, σa, minus the radial compressive stress, σr, and p' is the mean total compressive stress, p, minus the pore water pressure, u.
p' is also known as the mean effective compressive stress. It is useful to note that the effective axial and radial compressive stresses are σa' = σa − u and σr' = σr − u respectively, and that

\[ q = {\sigma \_{\rm{a}}} - {\sigma \_{\rm{r}}} \]

and

\[ p' = \frac{{{\sigma \_{\rm{a}}} + 2{\sigma \_{\rm{r}}}}}{3} - u \]

The convention in soil mechanics is to take compressive stresses as positive quantities. This is an important difference from the convention used in other TLPs in this series.

![](images/yielding1.gif)

The horizontal parameter vλ is a linear function of the specific volume, v, and the logarithm of the mean effective compressive stress, p'. The specific volume is the ratio of the volume occupied by soil to that which it would occupy if the voids were eliminated; values of v of about 2 are therefore typical for soils.

We now need to define what we mean by the 'dry' side of the critical state and the 'wet' side of the critical state for soils. 'Wet' means that when you shear the soil, grains expel water so that the soil feels sticky to the touch. 'Dry' does not literally mean dry: it means the pores are full of water, but when you shear the soil, the grains ride over one another - this is called dilation. The void space increases and fluid is taken into the soil: remoulding the soil in your hands will dry your hands. At the critical state, the soil neither dries your hands nor feels sticky to the touch.

For soils at the critical states C and E in the diagram above, and for soils on the 'wet' side of C and E, it is found experimentally that there is a linear relationship between the specific volume of the soil aggregate, v, and the logarithm of the mean effective compressive stress. Hence, a graph of v against ln p' for such soils is experimentally found to fit an equation of the form

\[ v = {v\_\lambda } - \lambda \ln p' \]

where by convention p' is usually normalised with respect to a standard stress of 1 kPa. Micromechanically, such an equation in v − ln p' space can be rationalised in terms of particle breakage as p' increases in crushable aggregates, and the fractal nature of the process of breakage in such aggregates, in which the tensile strength of the smallest particles within an assembly of particles constituting an aggregate of soil grains determines the yield stress of the aggregate (G.R. McDowell and M.D. Bolton, 'On the micromechanics of crushable aggregates', *Géotechnique* **48**, 667-679 (1998)). Physically, it can be appreciated that mud saturated with water has a high v relative to the same mud with water squeezed out of it as a consequence of an increase in p', and that there will be a lower asymptotic limit of v achieved by mud at large mean effective compressive stresses with the water squeezed out. Schematically, we might imagine the following:

![graph of v against p'](images/yielding2.gif)

Each part of this yield surface in (vλ, η) space will now be explained in more detail, going clockwise around the yield surface, starting with A and finishing with G:

Line AB - When σa >> σr, η → 3 as σr/σa → 0: in this limit (taking u = 0 for simplicity) q → σa and p' → σa/3, so η = q/p' → 3. Under these circumstances, axial compression will cause a test cylinder of soil to split on axial planes, just as logs split for firewood. Open cracks are therefore produced in this part of (vλ, η) space.
Curve BC - In this region of (vλ, η) space the Mohr-Coulomb failure criterion is assumed to hold:

\[{\tau} = \tau ^\* + {\sigma'\_{n}}\tan \phi' \]

where in this equation the compressive stress \({\sigma'\_{n}}\) is taken to be positive, using the conventions of soil mechanics, and where the dash denotes that pore water pressure has been taken into account, so that \({\sigma'\_{n}}\) is an effective normal compressive stress. For the interpretation of this criterion for triaxial stress tests on soils, using the convention that in soil mechanics pressures are taken to be positive, it is reasonable to map Mohr-Coulomb behaviour in (τ, \({\sigma'\_{n}}\)) space onto (q, p') space, so that a graph of q against p' is taken to be of the form

![](images/yielding3.gif)

\[ q = q \_0 +m{p'}      (1)\]

There are a number of caveats in making the assumption that Mohr-Coulomb behaviour holds. The most important caveat is that the Mohr-Coulomb criterion does not have a physical basis – it is an empirical equation based on the interpretation of experimental observations. In addition, the Mohr-Coulomb yield criterion does not involve the intermediate principal stress, whereas this stress is clearly required for the interpretation of triaxial stress tests. Notwithstanding these caveats, rearranging (1), we have:

\[ \eta = \frac{q}{{p'}} = \frac{q\_0}{{p'}} + m      (2)\]

If we now define ln p' to be x, then p' = ex, and so (2) can be rearranged in the form

\[ \eta = q\_0{e^{ - x}} + m      (3) \]

Hence, a graph of η against x has a gradient of \( - q\_0{e^{ - x}}\), so the gradient is negative, decreasing in magnitude exponentially as x increases. Plotted against vλ = v + λ ln p', the Mohr-Coulomb yield criterion has the same form, since v is taken to be a constant and λ is a dimensionless constant.

Eventually, as ln p' increases while η decreases, a critical state C is reached where physically the soil is sufficiently loosely packed that there is no specific volume change during a triaxial stress test, i.e., the soil neither contracts nor dilates. Hence, there is a critical value of vλ at C. This critical value of vλ is given the symbol Γ. During both clockwise and anticlockwise shearing, soils dilate for vλ < Γ, i.e., on the dry side of critical, and contract for vλ > Γ, i.e., on the wet side of critical.

Line CD - Line CD represents the behaviour of soft soil during plastic yielding and flow at C, and on the wet side of the critical state C, through the Original Cam-Clay model. The proof that CD is a straight line can be found in the section on the Original Cam-Clay model below. C is defined by the point (Γ, M) in (vλ, η) space, while, as will also be shown below, D is defined by the point (Γ + λ − κ, 0). In (vλ, η) space, CD is the straight line

\[ (\lambda - \kappa ){\rm{ }}\eta = M(\Gamma + \lambda - \kappa - {v\_\lambda }) \]

It is useful to have an idea of the values of some of the parameters in this model. For a material like London Clay, M = 0.89, λ = 0.161, κ = 0.062 and Γ is 2.759 when p'C is 1 kPa (A. Schofield, *Disturbed Soil Properties and Geotechnical Design*, Thomas Telford Ltd., London, 2005, p. 100).

Line DE - Line DE is the equivalent of line CD, i.e., Original Cam-Clay yield behaviour, but for when η < 0, i.e., for when σa << σr. The critical state E occurs at a position (Γ, ME) in (vλ, η) space, where there is no physical reason for ME and M to be the same.
Curve EF - Curve EF is the equivalent of curve BC, i.e., the yield surface is defined by Mohr-Coulomb failure, but under circumstances where η < 0. At F a stress state is reached where σa/σr = 0.

Line FG - When σa << σr, η → −1.5 as σa/σr → 0: in this limit (taking u = 0 for simplicity) q → −σr and p' → 2σr/3, so η → −1.5. Under these circumstances, radial compression causes cracking on planes perpendicular to the axis of a test cylinder: it can crack into many discs. This is termed spalling – it has the same nomenclature as that used to describe the failure in compression of thin films on substrates due to in-plane biaxial compressive stresses. Finally, note that F and B occur at different values of vλ: spalling occurs over a larger range of mean normal effective pressures than its equivalent process of cracking when σa >> σr.

Original Cam-Clay model =

The Original Cam-Clay model (OCC) was developed by Andrew Schofield in the 1960s as a description of the behaviour of saturated soils and sands. It shows how, depending on water content, soils can fail by spalling or by plasticity and liquefaction.

![water saturated sand cylinder](images/water_saturated_sand.jpg)

![triaxial testing diagram](images/triaxial_testiing.jpg)

Consider a cylinder of water-saturated sand in a triaxial testing regime, as in the above figure. The cylinder is subjected to the total axial stress, σa, and total radial stress, σr. It is more useful to work in terms of the effective stress, which takes pore water pressure, u, into account:

\[\sigma ' = \sigma - u       (1)\]

This allows us to define the general mean effective compressive stress as \( p' = \frac{1}{3}(\sigma\_{1}' + \sigma\_{2}' + \sigma\_{3}') \). In this case σa = σ1 and σr = σ2 = σ3, and so

\[ p' = \frac{1}{3}(\sigma\_{a}' + 2\sigma\_{r}')      (2)\]

A deviator stress, *q*, can also be defined, given by the equation

\[q = {\sigma\_{a}'} - {\sigma\_{r}'}       (3)\]

In triaxial yield testing it is found that sands and soils on the 'wet' side of the critical state yield on a ductile-plastic continuum. The plastic deformations arise as a change in the specific volume of the sample and a strain along its length. We can therefore define the following two strains:

Axial strain:

\[\delta {\varepsilon \_a} = \frac{{\delta l}}{l}       (4)\]

Volumetric strain:

\[\frac{{\delta v}}{v} = \delta {\varepsilon \_v} = \delta {\varepsilon \_a} + 2\delta {\varepsilon \_r}       (5)\]

To find the triaxial shear strain, εs, we separate the axial and volumetric strains:

\[\begin{aligned} \delta {\varepsilon \_s} &= \delta {\varepsilon \_a} - \frac{1}{3}\delta {\varepsilon \_v} \\ &= \frac{2}{3}(\delta {\varepsilon \_a} - \delta {\varepsilon \_r}) \end{aligned}       (6)\]

The work done per unit volume by straining is

\[\begin{aligned} \delta W & = p'\delta {\varepsilon \_v} + q\delta {\varepsilon \_s}\\ & = \frac{1}{3}(\sigma \_{a}' + 2\sigma \_{r}') (\delta {\varepsilon \_a} + 2\delta {\varepsilon \_r}) + \frac{2}{3} (\sigma \_{a}' - \sigma \_{r}') (\delta {\varepsilon \_a} - \delta {\varepsilon \_r})\\ & = \sigma \_{a}'\delta {\varepsilon \_a} + 2\sigma \_{r}'\delta {\varepsilon \_r} \end{aligned}       (7)\]

For work done by plastic straining at failure for soils at the critical state or wetter than the critical state, the relevant dissipation function is defined by the equation

\[p'\delta {\varepsilon \_v} + q\delta {\varepsilon \_s} = \delta W = Mp'\delta {\varepsilon \_s}       (8)\]

where M is the general coefficient of friction. The work done against friction per unit volume is defined by this equation. The work done in producing a volume change does not explicitly appear in this model because it is a consequence of the interlocking of particles. In OCC the associated plastic flow vector is locally orthogonal to the tangent of the yield locus, so that

\[dp'\delta {\varepsilon \_v} + dq\delta {\varepsilon \_s} \ge 0        (9)\]

In words, this equation is a recognition that the scalar product of the plastic flow normal to the yield locus at (*p*', *q*) and the incremental loads (d*p*', d*q*) causing failure at (*p*', *q*) must be positive.

To derive the OCC yield locus we combine Equations (8) and (9). First we divide equation (8) by p'δεs:

\[\frac{{\delta {\varepsilon \_v}}}{{\delta {\varepsilon \_s}}} + \frac{q}{{p'}} = M\]

Rearranging Equation (9) after setting the inequality to zero, we have:

\[\frac{{\delta {\varepsilon \_v}}}{{\delta {\varepsilon \_s}}} = - \frac{{dq}}{{dp'}}\]

so that, eliminating δεv/δεs between these two equations, we produce the equation

\[\frac{q}{{p'}} - \frac{{dq}}{{dp'}} = M         (10) \]

This is a differential equation on which we impose limits and introduce the stress ratio, η = q/p'. Differentiating η:

\[\frac{{d\eta }}{{dp'}} = \frac{1}{{p'}}\left( {\frac{{dq}}{{dp'}} - \frac{q}{{p'}}} \right) = - \frac{M}{{p'}}\]

\[ ⇒ \frac{d\eta}{M} + \frac{dp'}{p'} = 0               (11) \]

As we are finding the locus for 'wetter than critical' states, the integral is as follows:

\[ \begin{aligned} \int\_{M}^{\eta} \frac{1}{M}\,d\eta &= -\int\_{p'\_c}^{p'}\frac{1}{p'}\,dp' \\ \frac{\eta}{M}-1 &= -\ln\left(\frac{p'}{p'\_c}\right) \\ \frac{q}{Mp'} &= 1 - \ln\left(\frac{p'}{p'\_c}\right) \end{aligned} \]

Therefore, when *p*' is equal to *p*'c, η = *M*. Also, when *q* = 0, so that the soil cannot withstand any shear stress at failure, *p*' = e *p*'c, where e is the base of the natural logarithm, 2.71828… In practice *q* = 0 is actually a situation difficult to attain – see A. Schofield, *Disturbed Soil Properties and Geotechnical Design*, Thomas Telford Ltd., London, 2005, p. 106.
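The two endpoint checks quoted above can be verified directly from the integrated locus. The sketch below uses normalised units: the M value is the London Clay figure quoted earlier, and p'c = 1 is an assumption for illustration.

```python
import math

# Original Cam-Clay locus: q = M * p' * (1 - ln(p'/p'_c))
M, p_c = 0.89, 1.0  # M for London Clay (quoted in the text); p'_c normalised to 1

for ratio in (1.0, math.e):
    p = ratio * p_c
    q = M * p * (1.0 - math.log(p / p_c))
    print(f"p'/p'_c = {ratio:.3f}: q = {q:.3f}, eta = q/p' = {q / p:.3f}")
```

The first line recovers η = M at the critical state; the second gives q = 0 at p' = e p'c, the point D used earlier for the straight line CD in (vλ, η) space.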
Liquefaction

The phenomenon of soil liquefaction describes the failure of saturated or partially saturated soil which, upon the application of stress, behaves as a liquid when it fails. In the real world these failures usually result from earthquakes leading to landslides.

It was proposed by Casagrande that grain rotation can result in a change from an interlocking grain structure to a flowing grain structure. He tested this hypothesis by applying an impact force to a tank filled with loose sand. The aggregate remained solid until the instant the force was applied. The application of the force caused a high transient pore pressure to be generated within the aggregate, reducing the effective stress to near zero, so that liquefaction flow occurred.

![clean rock image](images/Clean rock image.png)

A large body of soil is not safe from liquefaction unless it is compacted below the critical void ratio. This is due to the pore pressure changes during shear. Casagrande studied this using triaxial compression tests. He showed how, with shear at a constant normal stress, the peak strength (starred point) of dense sand increases as it expands. Loose sand showed a fall in porosity and dense sand showed an increase. After the displacement, both loose and dense sand have reached a common porosity n0 at which the sand flows under constant shear stress.

![Casagrande plot](images/Casagrande 1.png)

![Casagrande plot 2](images/Casagrande 2.png)

This critical porosity n0 is independent of the effective pressure.
If silty sand is compacted below n0, it will not be at risk of liquefaction. Compression with drained shear is the most effective way of reducing the risk of liquefaction of an existing sample, as the water content is also reduced. ![Casagrande and CS image](images/Casagrande and CS.png) In the left-hand graph above, the dashed line BC represents a drained test and the solid line BD an undrained test. The double horizontal line in the left-hand graph is the critical void ratio n0. In the graph on the right, the double line at an angle to both the horizontal and the vertical is a *λ*-line representing soil at the critical state, CS, as a function of the specific volume of the aggregate, *v*, and the logarithm of the effective compressive stress *p*' applied to the aggregate relative to a standard compressive stress state of 1 kPa. In the critical void ratio theory, compression with large deformation depends on the initial state, while in CS theory an aggregate of grains does not retain any record of its initial structure. The states predicted under Casagrande's model differ from those predicted by the line AH in CS theory. Under CS theory the dashed line still follows the same path BC, but the solid line instead follows BK, stopping short at K rather than continuing to D. In 1975 Casagrande revised his theory of constant critical porosity to adopt the CS line, which combines the specific volume v and the effective pressure p'. The slope of the double *λ*-line in CS is greater than the slope of the elastic κ compression lines, shown in the right-hand graph as lines parallel to FH. The bold arrows marked **z** show the main differences between the two initial theories. In the Casagrande model, the vector **z** shows that a reduction of pressure is unsafe, as it results in a swelling of the aggregate: the soil moves from a safe dilative state to an unsafe contractive state, in which there is a likelihood of liquefaction. In CS theory, the vector **z** shows that increasing the effective pressure moves the aggregate from a safe dilative state to an unsafe contractive state. This means that even very dense sand can liquefy if the effective pressure is high enough; this is especially relevant for aggregates with a high water content. Concerns of liquefaction in the real world As well as natural disasters, there are other situations where liquefaction needs to be taken into account. Liquefaction in bulk cargo carriers has a catastrophic impact on ship stability and can result in large losses of life. Nickel ore is one of the most dangerous granular materials to transport, due to its fine grain size and the moisture held in its pores. There are many precautions and safety checks before a cargo is approved to leave port. Before the cargo experiences compaction at sea, it appears to be in a solid, dry state owing to frictional forces between the grains. ![Liquefaction](images/liquefaction.png) During transport, the motions of the ship on the waves cause compaction of the cargo, so that the pore size between the grains is reduced. Since the grains themselves can be very fine, the permeability can be limited, and so very little water drainage can occur. Once the pore water has been compressed, the pressure starts to push the grains apart until they are no longer in contact, and the inter-grain friction is lost. The shear strength of the cargo is reduced to zero, and once liquefaction has occurred the cargo flows out to a level surface.
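The sequence described here is Equation (1) at work: as the transient pore pressure rises towards the total stress, the effective stress, and with it the frictional shear strength, collapses. A minimal sketch, assuming a simple frictional strength τ = μσ' with an illustrative friction coefficient (this strength law and all numbers are assumptions for illustration, not from the text):

```python
# Sketch: loss of frictional shear strength as pore pressure rises.
# Assumed strength law: tau = mu * sigma' (illustrative numbers only).

def shear_strength(sigma_total, u, mu=0.6):
    """Frictional strength from the effective stress (Eq. 1)."""
    sigma_eff = max(sigma_total - u, 0.0)  # grains lose contact at sigma' = 0
    return mu * sigma_eff

sigma_total = 100.0                         # total normal stress (kPa)
for u in (0.0, 40.0, 80.0, 100.0):          # rising pore pressure (kPa)
    tau = shear_strength(sigma_total, u)
    state = "liquefied" if tau == 0.0 else "solid"
    print(f"u = {u:5.1f} kPa  ->  strength = {tau:5.1f} kPa  ({state})")
```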
Fortunately, this is often a transient phenomenon, because the cargo can settle back into a denser state. Some nickel ores have a clay-like consistency. In these materials liquefaction is instead a fatigue process: after a certain number of stress cycles the material fails as its cohesion collapses. This can result in the entire cargo becoming liquid simultaneously. As the cargo does not often fill the entire hold, there is sufficient space for the liquefied material to flow and destabilise the ship. Once this has occurred, it is very difficult to regain stability, and it can result in the capsize of a ship in a matter of minutes. Wet sand drying underfoot = When walking on wet sand, you may have noticed that the surface goes dry under your feet. This is not due to water being squeezed out into the surrounding sand. Instead, the water is absorbed into the sand directly under your feet, because you are applying a pressure to this sand as you walk. Under this pressure, water flows into the pores that have expanded due to the relative shearing of the sand particles, as shown in the diagram at the top of this page. Once your foot has been removed, the disturbed grains settle back into a denser arrangement, the water is pushed out of the now smaller pores, and the sand appears wet again. Video provided with kind permission of Ruben Meerman, 'The Surfing Scientist'. The video is also available directly from YouTube. Summary = After working through this teaching and learning package, you should: * Understand the concept of dilatation * Have a feeling for 'typical' angles of repose for granular materials * Recognise how friction, cohesive forces and the shapes of grains all affect the angle of repose of a pile of granular material * Be able to appreciate the various ways in which soil can fail in compression and how the choice of failure mode is a function of the water content of the soil * Be able to recognise the factors which cause soil to fail catastrophically and flow like a liquid – the phenomenon of liquefaction * Recognise that quicksand is dangerous, but that if you ever happen to fall into quicksand, you can reassure yourself that you will not be sucked beneath the surface. Questions = ### Quick questions *You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!* 1. Which of these factors does not influence the angle of repose taken up by a pile of granular particles?

| | | |
| - | - | - |
| | a | Shape |
| | b | Friction |
| | c | Density |
| | d | Cohesion |

2. Which of the following systems of granular materials exhibit dilatancy?

| | | |
| - | - | - |
| | a | Soil which when picked up from the ground is dry to the touch |
| | b | Tomato ketchup |
| | c | Sparkly nail polish |
| | d | Soil which when picked up from the ground is wet to the touch |

3. Inspired by this TLP, you purchase some corn starch and mix it with water in the ratio by volume of 1 part water to 2 parts corn starch to make a thick paste of corn starch, which you pour into a large plastic container. The surface of the corn starch is then hit quickly with a rubber mallet so that the time of contact with the surface is 100 milliseconds or so. What happens to the surface of the corn starch?

| | | |
| - | - | - |
| | a | The rubber mallet bounces back from the surface. Everything happens too quickly to work out what is going on! |
| | b | Water retreats from where the impact is made by the rubber mallet because of the phenomenon of dilatancy, so that in this region the surface goes from shiny to matt after the hammer has struck. The rubber mallet causes a small impression in the corn starch which over time gradually disappears as water returns. Eventually, the surface returns to being shiny and it is difficult to determine where the rubber mallet had struck. |
| | c | The rubber mallet causes a mess everywhere because it has gone deep into the paste initially, causing paste to be ejected from the surface into the surrounding environment, and has then been forcibly removed. |
| | d | The surface of the corn starch is lowered where the rubber mallet has struck, but it returns to its initial state quickly. There is no change to the shininess of the surface anywhere. |

### Deeper questions *The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.* 4. Ginkaku-ji ('The Silver Pavilion') is a Zen temple in Kyoto, Japan, renowned for its meticulously maintained raked dry sand garden called Ginshadan, known as the 'sea of silver sand', next to which there is a massive 1.8 m high sand cone platform called Kogetsudai, said to symbolise Mount Fuji. ![](../granular_materials/images/q4img.jpg) The 'angle of repose' of Kogetsudai is impressively high, as are the angles of repose in Ginshadan. Which of the following are the most likely explanations for how these angles of repose are maintained? * a   The sand is highly angular, very cohesive and has a high coefficient of friction because it is unusually rough, enabling the sand particles to form such steep structures. * b   Moisture, e.g., water, is added to the dry sand daily to enable the structures to be maintained, just as the shapes of sandcastles can be maintained by adhesive forces generated by capillarity between grains. * c   The dry sand contains some cementitious binding agent such as cement, gypsum mortar or lime mortar cunningly disguised in the white sand. * d   The custodians of such dry sand gardens have secrets handed down to them by their predecessors over the years which science has yet to establish. 5. Soil in an embankment is made progressively more wet by unusually heavy rain. What measures should be in place during the construction of the embankment and its subsequent maintenance to reduce the risk of possible failure of this embankment? Going further = ### Books * A. Mehta, *Granular Physics*, CUP, 2007. * A. Schofield, *Disturbed Soil Properties and Geotechnical Design*, Thomas Telford Ltd., London, 2005. * A. Schofield and C.P. Wroth, *Critical State Soil Mechanics*, McGraw-Hill, Maidenhead, 1968. * D.M. Wood, *Soil Behaviour and Critical State Soil Mechanics*, CUP, 1990. ### Websites There are several useful websites for going further into the subject of granular flow and for covering aspects of granular materials not covered in detail in this TLP, such as how forces are transmitted between particles in granular media. ### Other resources For those interested in going much deeper into the subject of granular flow, these research references should be useful: R. Albert, I. Albert, D. Hornbaker, P. Schiffer and A. Barabási, ‘Maximum angle of stability in wet and dry spherical granular media', *Phys. Rev. E* **56**, R6271-R6274 (1997).
A. Barabási, R. Albert and P. Schiffer, ‘The physics of sand castles: maximum angle of stability in wet and dry granular media', *Physica A* **266**, 366-371 (1999). M.D. Bolton, ‘The strength and dilatancy of sands', *Géotechnique* **36**, 65-78 (1986). H.W. Chandler and D.E. Macphee, ‘A model for the flow of cement pastes', *Cem. Concr. Res.* **33**, 265-270 (2003). B.-P. Dai, J. Yang and C.-Y. Zhou, ‘Micromechanical origin of angle of repose in granular materials', *Granular Matter* **19**: art. 24 (2017). P.A. Arias García, R.O. Uñac, A.M. Vidales and A. Lizcano, ‘Critical parameters for measuring angles of stability in natural granular materials', *Physica A* **390**, 4095-4104 (2011). T.C. Halsey and A.J. Levine, ‘How sandcastles fall', *Phys. Rev. Lett.* **80**, 3141-3144 (1998). B.C. Johnson, C.S. Campbell and H.J. Melosh, ‘The reduction of friction in long runout landslides as an emergent phenomenon', *J. Geophys. Res.: Earth Surf.* **121**, 881-889 (2016). A. Khaldoun, E. Eiser, G.H. Wegdam and D. Bonn, ‘Liquefaction of quicksand under stress', *Nature* **437**, 635 (2005). A. Mehta and G.C. Barker, ‘The dynamics of sand', *Rep. Prog. Phys.* **57**, 384-416 (1994). C.N.P. Mackenzie, *Traditional Timbering in Soft Ground Tunnelling: A Historical Review*, British Tunnelling Society, 2014. G.R. McDowell and M.D. Bolton, ‘On the micromechanics of crushable aggregates', *Géotechnique* **48**, 667-679 (1998). G.R. McDowell, M.D. Bolton and D. Robertson, ‘The fractal crushing of granular materials', *J. Mech. Phys. Solids* **44**, 2079-2102 (1996). S. Nowak, A. Samadani and A. Kudrolli, ‘Maximum angle of stability of a wet granular pile', *Nature Physics* **1**, 50-52 (2005). D.A. Robinson and S.P. Friedman, ‘Observations of the effects of particle shape and particle size distribution on avalanching of granular media', *Physica A* **311**, 97-110 (2002). C.M. Sands, A.R. Brown and H.W. Chandler, ‘The application of principles of soil mechanics to the modelling of pastes', *Granular Matter* **13**, 573-584 (2011). Z.Y. Zhou, R.P. Zou, D. Pinson and A.B. Yu, ‘Angle of repose and stress distribution of sandpiles formed with ellipsoidal particles', *Granular Matter* **16**, 695-709 (2014).
Aims On completion of this TLP you should: * be able to describe how the Jominy test is conducted and how the information that it provides is obtained and presented * be able to describe the general effects of alloying and prior heat treatment on the Jominy test results Introduction The Jominy end quench test is used to measure the hardenability of a steel, which is a measure of the capacity of the steel to harden in depth under a given set of conditions. This TLP considers the basic concepts of hardenability and the Jominy test. Knowledge about the hardenability of steels is necessary in order to select the appropriate combination of alloy steel and heat treatment to manufacture components of different sizes, so as to minimise thermal stresses and distortion. The Jominy end quench test is the standard method for measuring the hardenability of steels. This describes the ability of the steel to be hardened in depth by quenching. Hardenability depends on the chemical composition of the steel and can also be affected by prior processing conditions, such as the austenitising temperature. It is not only necessary to understand the basic information provided by the Jominy test, but also to appreciate how the information obtained can be used to understand the effects of alloying in steels and of the steel microstructure. Hardenability = Hardenability is the ability of a steel to partially or completely transform from austenite to some fraction of martensite at a given depth below the surface when cooled under a given condition. For example, a steel of high hardenability can transform to a high fraction of martensite to depths of several millimetres under relatively slow cooling, such as an oil quench, whereas a steel of low hardenability may only form a high fraction of martensite to a depth of less than a millimetre, even under rapid cooling such as a water quench. Hardenability therefore describes the capacity of the steel to harden *in depth* under a given set of conditions. Steels with high hardenability are needed for large high-strength components, such as large extruder screws for injection moulding of polymers, pistons for rock breakers, mine shaft supports and aircraft undercarriages, and also for small high-precision components such as die-casting moulds, drills and presses for stamping coins. High hardenability allows slower quenches to be used (e.g. an oil quench), which reduces the distortion and residual stress arising from thermal gradients. Steels with low hardenability may be used for smaller components, such as chisels and shears, or for surface-hardened components such as gears. Hardenability can be measured using the Jominy end quench test. Jominy end quench test The test sample is a cylinder with a length of 102 mm (4 inches) and a diameter of 25.4 mm (1 inch). ![Photograph of a Jominy test specimen](images/jom_spec-s.jpg) Jominy test specimen The steel sample is *normalised* to eliminate differences in microstructure due to previous forging, and then *austenitised*, usually at a temperature of 800 to 900°C. The test sample is quickly transferred to the test machine, where it is held vertically and sprayed with a controlled flow of water onto one end. This cools the specimen from one end, simulating the effect of quenching a larger steel component in water.
![Photograph of a Jominy test machine](images/jom_test-s.jpg) Jominy test machine The cooling rate varies along the length of the sample, from very rapid at the quenched end to rates equivalent to air cooling at the other end. ![Photograph of end quenching and graph of hardness against distance](images/jominy.gif) The round specimen is then ground flat along its length, to a depth of 0.38 mm (15 thousandths of an inch), to remove decarburised material. The hardness is measured at intervals from the quenched end. The interval is typically 1.5 mm for alloy steels and 0.75 mm for carbon steels. High hardness occurs where high volume fractions of martensite develop. Lower hardness indicates transformation to bainite or ferrite/pearlite microstructures. | | | | - | - | | Micrograph of martensite (low alloy steel) | Micrograph of ferrite/pearlite (low alloy steel) | | Martensite | Ferrite/pearlite | Jominy end quench hardness data for two steels of different hardenability can be seen in a later section of this TLP, with images of the microstructure variation along the length of the sample. Similar tests have been developed in other countries, such as the SAC test, which uses a sample quenched from all sides by immersion in water; this is commonly used in the USA. Video clips of the Jominy test procedure ### Video clip 1: Transferring the sample from furnace to quenching machine ![Photograph of sample being lowered into Jominy machine](images/jominy1.jpg) The specimen is suspended from a wire and held in a furnace to austenitise the microstructure at around 900°C. It is then carefully and quickly moved to the quenching machine and positioned above a water jet. The water jet is started and sprayed onto the bottom of the specimen until the specimen is cool. ### Video clip 2: Quenching the sample ![Photograph of water jet quenching end of sample](images/jominy2.jpg) As the water jet sprays onto the end of the hot, glowing specimen, a cold dark region spreads up the specimen. The cold region has transformed from austenite to a mixture of martensite, ferrite and pearlite. The proportions of the phases at any position depend on the cooling rate, with more martensite formed where the cooling rate is fastest. Ferrite and pearlite are formed where the cooling rate is slower. ### Video clip 3: Jominy end quench test ![Photograph of sample being removed from furnace](images/jet.jpg) This alternative, longer video clip (contributed by Oxford Brookes University) shows both the transfer of the sample from furnace to Jominy machine, and the jet spraying one end of the sample. Uses of Jominy data: Measurement of hardenability = Data from the Jominy end quench test can be used to determine whether a particular steel can be sufficiently hardened in different quenching media, for different section diameters. For example, the cooling rate at a distance of 9.8 mm from the quenched end is equivalent to the cooling rate at the centre of an oil-quenched bar with a diameter of 28 mm. Full transformation to martensite in the Jominy specimen at this position indicates that a 28 mm diameter bar can be *through hardened*, i.e. hardened through its full thickness. A high hardenability is required for through hardening of large components.
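Correlations of this kind, between Jominy position and the bar diameter with an equivalent centre cooling rate for a given quenchant, are published as charts; in code they reduce to a simple interpolation. In the sketch below only the (9.8 mm, 28 mm) oil-quench pair comes from the text; the other pairs are invented purely for illustration:

```python
# Sketch: interpolating an (illustrative) correlation between Jominy
# distance and the oil-quenched bar diameter with the same centre
# cooling rate. Only the (9.8 mm, 28 mm) pair is quoted in the text;
# all other pairs are hypothetical.

import bisect

jominy_mm  = [2.0, 5.0, 9.8, 15.0, 25.0]    # distance from quenched end
bar_dia_mm = [8.0, 16.0, 28.0, 42.0, 65.0]  # equivalent bar diameter (oil)

def equivalent_bar_diameter(d_jominy):
    """Linear interpolation within the tabulated correlation."""
    i = bisect.bisect_left(jominy_mm, d_jominy)
    i = min(max(i, 1), len(jominy_mm) - 1)
    x0, x1 = jominy_mm[i - 1], jominy_mm[i]
    y0, y1 = bar_dia_mm[i - 1], bar_dia_mm[i]
    return y0 + (y1 - y0) * (d_jominy - x0) / (x1 - x0)

print(f"{equivalent_bar_diameter(9.8):.0f} mm")   # -> 28 mm, as in the text
print(f"{equivalent_bar_diameter(12.0):.1f} mm")  # illustrative only
```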
The Jominy data can also be presented using CCT (**C**ontinuous **C**ooling **T**ransformation) diagrams, which are used to select steels to suit the component size and quenching medium. Slow quenching speeds are often chosen to reduce distortion and residual stress in components. ![CCT diagram for an alloy steel](images/cct.gif) CCT diagram for an alloy steel Slower cooling rates occur at the core of larger components, compared to the faster cooling rate at the surface. In the example here, the surface will be transformed to martensite, but the core will have a bainitic structure with some martensite. Uses of Jominy data: Effects of alloying and microstructure = The Jominy end quench test can also be used to demonstrate the effects of microstructural and alloying variables on the hardenability of steels. These include the alloying elements and the grain size. #### Alloying elements The main alloying elements which affect hardenability are carbon, boron and a group of elements including Cr, Mn, Mo, Si and Ni. **Carbon** Carbon controls the hardness of the martensite. Increasing the carbon content increases the hardness of steels up to about 0.6 wt%. At higher carbon levels, the formation of martensite is depressed to lower temperatures and the transformation from austenite to martensite may be incomplete, leading to retained austenite. This composite microstructure of martensite and austenite gives a lower hardness to the steel, although the microhardness of the martensite phase itself is still high. ![Graph of hardness against % carbon content](images/retained.gif) Effect of carbon content (wt%) on hardness Carbon also increases the hardenability of steels by retarding the formation of pearlite and ferrite. However, this effect is too small to be commonly used for the control of hardenability. High carbon steels are prone to distortion and cracking during heat treatment, and can be difficult to machine in the annealed condition before heat treatment. It is more common to control hardenability with other elements, and to use carbon levels of less than 0.4 wt%. **Boron** Boron is a very potent alloying element, typically requiring only 0.002 to 0.003 wt% to have an effect equivalent to that of 0.5 wt% Mo. The effect of boron is independent of the amount added, provided sufficient is present, and it is greatest at lower carbon contents, so boron is typically used with lower carbon steels. Boron has a very strong affinity for oxygen and nitrogen, with which it forms compounds, and can therefore only affect the hardenability of steels if it is in solution. This requires the addition of "gettering" elements such as aluminium and titanium to react preferentially with the oxygen and nitrogen in the steel. **Chromium, molybdenum, manganese, silicon, nickel, vanadium** The elements Cr, Mo, Mn, Si, Ni and V all retard the phase transformation from austenite to ferrite and pearlite. The most commonly used of these elements are Cr, Mo and Mn. The retardation is due to the need for redistribution of the alloying elements during the diffusional phase transformation from austenite to ferrite and pearlite. The solubility of the elements varies between the different phases, and the interface of the growing phase cannot move without diffusion of the slowly moving elements. There are quite complex interactions between the different elements, which also affect the temperatures of the phase transformation and the resultant microstructure.
Steel compositions are sometimes described in terms of a *carbon equivalent*, which describes the magnitude of the effect of all of the elements on hardenability. #### Grain size Increasing the austenite grain size increases the hardenability of steels. The nucleation of ferrite and pearlite occurs at heterogeneous nucleation sites such as the austenite grain boundaries. Increasing the austenite grain size therefore decreases the number of available nucleation sites, which retards the rate of the phase transformation. This method of increasing the hardenability is rarely used, since substantial increases in hardenability require a large austenite grain size, obtained through high austenitisation temperatures. The resultant microstructure is quite coarse, with reduced toughness and ductility. ![Graphs of hardness against distance for increasing austenitisation temperature](images/grain.gif) Effect of austenite grain size on hardenability The austenite grain size can be affected by other stages in the processing of steel, and therefore the hardenability of a steel also depends on the previous stages employed in its production. Example Jominy end quench test data = A plain carbon steel and an alloy steel were assessed using the Jominy end quench test. The hardness of the samples was measured as a function of the distance from the quenched end to demonstrate the different hardenability of the two steels. The data is shown as Vickers and Rockwell hardness. The alloy compositions are given in the table below.

| (wt%) | C | Mn | Cr | Ni | Si | Mo | P | S |
| - | - | - | - | - | - | - | - | - |
| **Plain carbon steel** | 0.3 | 0.7 | 0.1 | 0.14 | 0.26 | 0.03 | 0.003 | 0.02 |
| **Alloy steel** | 0.3 | 0.6 | 0.7 | 3.5 | 0.26 | 0.35 | 0.01 | |

### Vickers hardness The *Vickers* hardness test uses a square pyramidal diamond indentor. The recorded hardness depends on the indentation load and the width of the square indentation made by the diamond. The indentation load is typically between 10 and 30 kg. The hardness number is usually denoted by, for example, HV20 for **H**ardness **V**ickers **20** kg. The Vickers test is most commonly used in the UK. The Rockwell hardness of a metal can also be determined using a similar technique. The variation of hardness was measured with distance from the quenched end, and the results are plotted in the graph below. The alloy steel clearly has the higher hardenability, forming martensite to a greater depth than the plain carbon steel. Look at both microstructures at high magnification, and try to observe the relationship between the volume fraction of martensite and the hardness of the steel. ![Graph of Vickers hardness against distance from quenched end](images/vickers.gif) ### Rockwell hardness The Vickers hardness scale is not the only scale used to measure hardness in metals. The *Rockwell* hardness test measures a number which depends on the difference in the depth of an indentation made by two loads, a minor load followed by a major load. There are different scales for the Rockwell hardness test. For example, the commonly used Rockwell C test uses a minor load of 10 kg, followed by a major load of 150 kg. The number is denoted by HRC for **H**ardness **R**ockwell **C** scale. The indentor is either a conical diamond pyramid or a hardened steel ball. The Rockwell test is commonly used in the USA.
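The geometry behind the Vickers number can be captured in a few lines. Using the standard definition HV = 2F sin(136°/2)/d² ≈ 1.8544 F/d², with F the load in kgf and d the mean indentation diagonal in mm, a minimal sketch (with illustrative input values, not from the data above) is:

```python
import math

def vickers_hardness(load_kgf, diagonal_mm):
    """HV from indentation load (kgf) and mean indentation diagonal (mm)."""
    # HV = 2 F sin(136 deg / 2) / d^2  ~=  1.8544 F / d^2
    return 2.0 * load_kgf * math.sin(math.radians(136.0 / 2.0)) / diagonal_mm**2

# Illustrative: a 20 kgf load (HV20) leaving a 0.30 mm mean diagonal
print(f"HV20 = {vickers_hardness(20.0, 0.30):.0f}")   # ~412
```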
Other tests include the Brinell and Knoop hardness tests. There are conversion charts between the hardness scales; these can be found in standards, such as the British Standards, and in reference works such as the ASM Metals Handbook. It is important to use the correct conversion chart for each material, since the hardness test causes plastic strain, and the conversion therefore varies with the strain hardening properties of the material. The graph below gives the Jominy end quench data in terms of the Rockwell hardness number. ![Graph of Rockwell hardness against distance from quenched end](images/rockwell.gif) Heat flow simulation In this heat flow simulation you can adjust various parameters and observe the effect on the heat flow and cooling of the specimen. The simulation ignores the effect of heat loss from the sides of the specimen, i.e. it employs a one-dimensional model of heat flow through the specimen. The bar is divided into 25 equal-length elements and, at each time step of the simulation, a new temperature, resulting from heat transfer at either end, is calculated for each element. The size of the time step is set to the maximum allowed while ensuring numerical stability of the simulation. Summary = The Jominy end quench test is the standard method for measuring the hardenability of steels. This describes the ability of the steel to be hardened in depth by quenching. The hardenability depends on the alloy composition of the steel, and can also be affected by prior processing, such as the austenitisation temperature. Knowledge of the hardenability of steels is necessary in order to select the appropriate combination of alloy and heat treatment for components of different size, to minimise thermal stresses and distortion. Questions = 1. Three low alloy steels, which differ only in their carbon content (0.1, 0.3 and 0.7 wt% carbon), are characterised using the Jominy end quench test. Select the plot of hardness variation along the test specimen that best describes their behaviour. ![Plots of hardness against distance](images/plots1.gif)

| | | |
| - | - | - |
| | a | Plot (a) |
| | b | Plot (b) |
| | c | Plot (c) |
| | d | Plot (d) |

2. Two specimens of a low alloy steel with 0.3 wt% carbon are characterised using the Jominy end quench test. One was austenitised at 950°C, and the other at 1100°C. Select the plot of hardness variation along the test specimen that best describes their behaviour. ![Plots of hardness against distance](images/plots2.gif)

| | | |
| - | - | - |
| | a | Plot (a) |
| | b | Plot (b) |
| | c | Plot (c) |
| | d | Plot (d) |

3. Three medium carbon steels (0.3 wt% carbon) that differ only in their chromium content (0.25, 0.5 and 1 wt%) are characterised using the Jominy end quench test. Select the plot of hardness variation along the test specimen that best describes their behaviour. ![Plots of hardness against distance](images/plots3.gif)

| | | |
| - | - | - |
| | a | Plot (a) |
| | b | Plot (b) |
| | c | Plot (c) |
| | d | Plot (d) |

4. You have three steels. Select the most appropriate steel to achieve the necessary levels of mechanical properties, residual stress and distortion in a 1 mm diameter wood-working drill.

| | | |
| - | - | - |
| | a | 1% C, 0.4% Si, 1% Mn, 5% Cr, 1% Mo |
| | b | 0.4% C, 0.4% Mn, 0.3% Si |
| | c | 0.5% C, 4% Cr, 6% Mo |

5. Again, you have three steels.
Select the most appropriate steel to achieve the necessary levels of mechanical properties, residual stress and distortion in an injection moulding die for a mobile phone plastic case.

| | | |
| - | - | - |
| | a | 1% C, 0.4% Si, 1% Mn, 5% Cr, 1% Mo |
| | b | 0.4% C, 0.4% Mn, 0.3% Si |
| | c | 0.5% C, 4% Cr, 6% Mo |

6. Again, you have three steels. Select the most appropriate steel to achieve the necessary levels of mechanical properties, residual stress and distortion in a tool for high speed milling of steel components.

| | | |
| - | - | - |
| | a | 1% C, 0.4% Si, 1% Mn, 5% Cr, 1% Mo |
| | b | 0.4% C, 0.4% Mn, 0.3% Si |
| | c | 0.5% C, 4% Cr, 6% Mo |

Going further = ### Reading * R.W.K. Honeycombe and H.K.D.H. Bhadeshia, *Steels: Microstructure and Properties*, Edward Arnold, 1995. * Karl-Erik Thelning, *Steel and its Heat Treatment*, Butterworths, 1975. * D.T. Llewellyn and R.C. Hudd, *Steels: Metallurgy and Applications*, 3rd Edition, Reed Educational and Professional Publishing, 1998. * ASM Handbook, Volume 4: Heat Treating, ASM International, 1991. ### Websites * An article by James Marrow, 9/7/2001; much of the material is in common with this TLP. * A page with a Java applet simulation which allows you to perform a series of Jominy end quench tests on different grades of steel, to see how composition affects hardenability. * An article by Daniel Herring, 10/10/2001.
Aims On completion of this TLP you should: * Know the basic characteristics of liquid crystals, as well as the types of molecules capable of forming them. * Be aware of the different degrees of order that can be present in a liquid crystal. * Understand the various optical properties of liquid crystals, and how they are affected by factors such as molecular shape and defects within a sample. * Be aware of some of the technological applications of these materials. Before you start This TLP is mostly self-explanatory; however, familiarity with polarised light microscopy, as well as with the concept of refractive indices, is recommended. Introduction Liquid crystals, as their name implies, are substances that exhibit properties of both liquids and crystals. Specifically, their molecules have the *high orientational order* found in crystalline solids as well as the *low positional order* found in liquids or amorphous glasses. Most liquid crystals are *thermotropic*; their degree of orientational and positional order depends on temperature, and so their liquid crystalline phase occurs within a limited temperature range between the solid and liquid phases. ![Diagram of phase changes solid to liquid crystal to liquid](images/phasetranstemp0.png) Liquid crystal molecules are typically ‘rod-shaped' – long and thin, with a rigid centre that allows them to maintain their shape. They also have flexible ends, which means that they can still flow past each other with ease. Molecules with this shape are known as *calamitic* liquid crystals. It is also possible to find liquid crystals made up of disc-shaped molecules; these are given the name *discotic* liquid crystals. The same rules apply here – a rigid centre is essential in order for the molecule to keep its shape, and flexible edges allow ease of movement. Furthermore, polymeric liquid crystals, as well as those whose behaviour depends on their concentration in solution (*lyotropic* liquid crystals), have also been discovered. For the purposes of this TLP we will be concentrating on *calamitic* (rod-shaped) liquid crystals only; however, similar principles can be applied to all of the types of liquid crystal mentioned above. Order and disorder – molecular orientation Due to their distinctive shape, calamitic liquid crystal molecules experience stronger attractive forces when arranged parallel to one another. They therefore tend to align themselves along one particular direction; this is known as the *director* and is given the notation **n**. The angle between the individual liquid crystal molecules and the director gives an indication of the *orientational order* of the system, which can be calculated using the following formula (a short numerical sketch of this calculation is given at the end of the next section): ![Diagram of order parameter in liquid crystal](images/orderparam.png) \({\rm{order}}\;{\rm{parameter}}\;Q = {{(3\left\langle{{\cos }^2}\theta \right\rangle - 1)}} \;/\;{2}\) When Q = 1 the liquid crystal has complete orientational order; when Q = 0 it has no orientational order and has therefore become an isotropic liquid. For a thermotropic liquid crystal the variation of Q with temperature follows a trend similar to the one shown in the diagram below (exact values will vary): ![Graph of variation of Q with temperature](images/ordergraph.png) Order and disorder – molecular position = In the introduction we stated that whilst liquid crystals have high orientational order, their positional order is very low. However, certain positional arrangements are possible.
In general, calamitic liquid crystals can be divided into three different *mesophases*:

| | |
| - | - |
| ***Nematic:*** Nematic liquid crystals have *no positional order* – they only have orientational order. | Diagram of nematic crystals |
| ***Smectic:*** Smectic liquid crystals consist of molecules arranged into separate layers. However, there is no further positional order within the layers themselves. | Diagram of smectic crystals |
| ***Chiral nematic:*** In chiral nematic liquid crystals we see a helical structure, where the director is rotated slightly in each subsequent layer of molecules – the distance along the axis between two molecules with parallel directors is called the *pitch* of the liquid crystal. Their name derives from the fact that they are easily made by mixing a nematic with a chiral substance (which does not have to be a liquid crystal itself). Historically, they were also known as *cholesteric* liquid crystals, as the first molecules found to display these properties were those related to cholesterol. | Diagram of cholesteric crystals |

As we will see later, the different degrees of positional ordering lead to very different optical properties. Defects = Just like regular crystal lattices, liquid crystals can contain defects; these are given the name *disclinations*. Normally liquid crystals are most stable when all of the molecules are aligned to point along a single director. However, external factors can force the direction of the director to change abruptly somewhere within the sample (such factors include external electric/magnetic fields or even the rigid sides of the container itself). Where this occurs the local director is said to be undefined, and the region in question is the disclination. The stability of a disclination is dependent on the Frank free energy of the liquid crystal – however, discussion of this particular topic is beyond the scope of this TLP. Some of the possible disclinations in a *nematic* liquid crystal are shown in the diagrams below (the dot indicates the location of the disclination itself, whilst the lines represent the surrounding liquid crystal molecules and their orientations). Each type is assigned a number and a sign; the number indicates the strength of that particular disclination, whilst the sign tells us which disclinations are capable of cancelling each other out should they come into contact (for example, the s = ½ and s = -½ disclinations could annihilate to produce a region with no defects). ![Diagram of disclinations](images/disclination2.png) From the above diagrams we can therefore identify the disclination shown earlier in this TLP as s = -1/2. In actuality disclinations are 3-dimensional phenomena; the following 3D models are of s = 1 and s = -1 disclinations where the liquid crystal is constrained within a particular environment:

| | |
| - | - |
| View an *s = 1* point disclination (such as in a small spherical droplet) | View an *s = -1* point disclination (such as in a small spherical droplet) |
| View an *s = 1* line disclination (such as in a capillary tube) | View an *s = -1* line disclination (such as in a capillary tube) |

The concept of disclinations in liquid crystals is analogous to that of dislocations in solid materials, which are covered in more detail in the Dislocations TLP.
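Before moving on to optical properties, note that the order parameter Q introduced earlier is easy to evaluate numerically. A minimal sketch using synthetic molecular orientations (not measured data): for a well-aligned phase Q approaches 1, while for directions distributed uniformly over the sphere Q approaches 0.

```python
import math, random

def order_parameter(angles):
    """Q = (3<cos^2 theta> - 1)/2 for angles (rad) between molecules and n."""
    mean_cos2 = sum(math.cos(t) ** 2 for t in angles) / len(angles)
    return (3.0 * mean_cos2 - 1.0) / 2.0

random.seed(0)
# Well-aligned phase: small Gaussian spread about the director
nematic = [random.gauss(0.0, 0.2) for _ in range(100_000)]
# Isotropic liquid: directions uniform on the sphere (cos theta uniform)
liquid = [math.acos(random.uniform(-1.0, 1.0)) for _ in range(100_000)]

print(f"Q (aligned)   ~ {order_parameter(nematic):.2f}")   # close to 1
print(f"Q (isotropic) ~ {order_parameter(liquid):.2f}")    # close to 0
```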
Optical properties – birefringence in nematics One factor common to all liquid crystals is anisotropy; this in turn means that all liquid crystals will have a property known as *birefringence*. The result is that, for a given sample of a certain thickness and birefringence, when observing it through crossed polars we will see a colour made up of all the wavelengths of light that aren't blocked by the analyser. Using tools such as the Michel-Levy chart we can see which colours correspond to which blocked wavelengths and thicknesses – this in turn tells us the birefringence of the particular liquid crystal. Looking at an image of a nematic liquid crystal we do not actually see colours – rather a bright white with several dark patches. ![Image of nematic liquid crystal](images/nematic.jpg) This is because the birefringence of a typical nematic at most temperatures is so great that we do not see much colour, but rather ‘high order white' (seen to the right of the Michel-Levy chart). The dark regions occur when the orientation of the director is completely parallel or perpendicular to one of the polarisers – in these regions the light passing through the sample only experiences one refractive index, and so behaves as if it were passing through an isotropic liquid. This effect can be seen in the demonstration below. It is a ‘virtual optical microscope' – by rotating the sample we can observe the regional variations in brightness as the different local directors move in and out of parallel with one of the polarisers (i.e. every 90° the areas that were lightest become the darkest, and vice versa). Note that the birefringence (n1 – n2) of a nematic liquid crystal is dependent on its temperature. As shown on the diagram below, it decreases with increasing temperature, meaning that the most colours will be seen when the sample is held close to T\*. ![Graph of refractive index with temperature](images/indexgraph.png) Optical properties – birefringence in chiral nematics = Chiral nematic liquid crystals also exhibit birefringence – however, due to their chirality, the manner in which they split light into components is slightly different. When light is travelling along the helical axis of a chiral nematic it does not undergo regular (‘linear') birefringence. This is because, as the director rotates, the two components rotate along with it, and having travelled through one 360° pitch the components experience exactly the same overall refractive index. The result is that one component does not end up travelling faster than the other, and so we see no optical path difference. However, in a chiral material light can become *circularly* polarised. In this case the light is split not into two perpendicular components, but instead into two components that are constantly rotating in opposite directions. The difference between linear polarisation (as in a nematic) and circular polarisation (as in a chiral nematic) is illustrated in the demonstration below. (The overall polarisation is indicated by the blue vector labelled S, whereas the red vectors indicate the components along the permitted vibration directions PVD1 and PVD2. The black colouring indicates when the vectors are moving in the opposite direction.) Optical properties – observing defects A further property of nematic liquid crystals when viewed using polarised light microscopy is the appearance of *schlieren brushes*; these are the distinctive dark cross shapes that appear throughout the image below.
![Image of schlieren brushes in nematic liquid crystal](images/schlieren.jpg) The centre of a cross is in fact a disclination in the liquid crystal, the surrounding dark regions occurring where the orientation of the molecules is parallel to either the polariser or the analyser. In order to work out which type of cross corresponds to which type of disclination, we therefore need to think about the orientation of the local directors relative to a given set of crossed polars. This is shown for four different disclinations below: ![Diagram of disclination in nematic liquid crystal s = 1](images/schlieren1.png) ![Diagram of disclination in nematic liquid crystal s = -1](images/schlieren2.png) ![Diagram of disclination in nematic liquid crystal s = -1/2](images/schlieren3.png) ![Diagram of disclination in nematic liquid crystal s = 1/2](images/schlieren4.png) A further property of disclinations in nematic liquid crystals is that when one of the polarisers is rotated the schlieren brushes appear to rotate themselves; furthermore, disclinations with opposite signs can be differentiated by the fact that their brushes appear to rotate in opposite directions. This is demonstrated in the video below: Video of the movement of schlieren brushes in a nematic liquid crystal Observing phase transitions = As mentioned in the introduction, the liquid crystalline phase usually occurs in a small temperature range between the solid and liquid phases. In the following section we are going to observe this phase transition using MBBA, ![Image of MBBA, a nematic liquid crystal](images/mbba.gif) which is a nematic liquid crystal between 21°C and 48°C. In each of the following experiments a microscope slide containing MBBA is heated until it becomes an isotropic liquid. It is then observed between crossed polarisers as it is allowed to cool down to room temperature. Experiment 1 uses regular MBBA on a regular glass slide; Experiment 2 uses regular MBBA on a slide with parallel scratches on its surface; Experiment 3 uses MBBA mixed with Canada balsam (a chiral glue) on a regular glass slide. ### Experiment 1: Isotropic Liquid to Nematic Liquid Crystal Video of the phase transformation (20x magnification, 3x speed) * Note that as the temperature decreases the coloured liquid crystalline phase begins to nucleate at various random points across the slide (remember that the colours seen during the transition occur due to birefringence). * As described in the previous section, darker regions are visible where the director is aligned with either the polariser or the analyser. * The video above is at too low a magnification to confidently identify any schlieren brushes. In the video sequence below, another sample is recorded forming at a higher magnification:

| | |
| - | - |
| Video of part 1 of the phase transformation in another sample | Video of part 2 of the phase transformation in another sample |

* Finally, towards the end of the video we can see that the different regions of the liquid crystal are still drifting around, albeit slowly. This shows how fluid the MBBA is, despite having entered the liquid crystalline phase (high fluidity is another characteristic of nematics). ### Experiment 2: Isotropic Liquid to Nematic Liquid Crystal (On Grooved Surface)
Video of the phase transformation (20x magnification, 4x speed) * The elongated liquid crystal molecules tend to orientate along the scratches – this is why we now see uniform extinctions rather than various ‘blotches' of darkness. * Nucleation of the liquid crystalline phase is also guided by the scratches, with the liquid crystal sweeping in from the side rather than appearing at random points. ### Experiment 3: Isotropic Liquid to Chiral Nematic Liquid Crystal Video of the phase transformation (20x magnification, 3x speed) * Note how different the growth and final appearance of the liquid crystalline phase is, even when the only change to the sample is the addition of a chiral liquid. * Although nucleation begins in a similar fashion to the regular nematic, we can see the different regions merge with one another to form the final ‘fingerprint' structure that is characteristic of chiral nematics with their helical axis parallel to the surface of the slide. * Circular birefringence only occurs when the light is travelling up the helical axis; therefore in this case the lines we are seeing are actually turns of the helix. Note that there also exist phase transitions between different degrees of ordering (e.g. a smectic → nematic phase transition). Whilst often thermally activated as well, these can also be induced by factors such as the application of an external electric field or the addition of a particular type of solvent. Commercial uses = A well-known technological use for liquid crystals is in liquid crystal displays (LCDs). The most common type in use today is the *twisted nematic* LCD, which makes use of the *Freedericksz transition* (a liquid crystal phase transition induced by the application of an electric field). A twisted nematic is often made from a nematic liquid crystal with a chiral dopant added to it. The following demonstration shows how a single-pixel display is made and operated: Summary = * Liquid crystals are characterised by their *high orientational* and *low positional* molecular order. * Molecules capable of forming liquid crystals are always *anisotropic* – typically they will be *calamitic* (rod-shaped). * There are three types of calamitic liquid crystal: *nematic*, *smectic* and *chiral nematic*. They are defined by their differing degrees of positional order. * The degree of orientational order of a liquid crystal can be quantified using the \({\rm{order}}\;{\rm{parameter}}\;Q = {{(3\left\langle{{\cos }^2}\theta \right\rangle - 1)}} \;/\;{2}\) * Defects in liquid crystals are given the name *disclinations*. Each type of disclination is assigned a positive or negative number; the magnitude indicates its strength, whilst the sign indicates which disclinations can cancel each other out. * Disclinations can be viewed directly by polarised light microscopy. For example, in a nematic they appear as *schlieren brushes*. * Liquid crystals also exhibit *birefringence* when viewed through crossed polars. * The most common modern commercial use of liquid crystals is in liquid crystal displays. Questions = ### Quick questions *You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!* 1. Which of the following molecules is likely to form a liquid crystalline phase?
| | | |
| - | - | - |
| | a | |
| | b | |
| | c | |
| | d | |

### Deeper questions *The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.* 2. The bright colours found on some insect wings are due to the existence of a thin membrane containing a chiral nematic liquid crystal on their surfaces. Keeping in mind that the light will be reflected by their wings rather than transmitted through them, how do these colours occur? 3. In the introduction to this TLP lyotropic liquid crystals were mentioned. Unlike thermotropic species, their properties and mesophases are mainly affected by their concentration in solution, as well as by the other solutes and solvents present. Molecules that form lyotropic liquid crystals can usually be thought of as long-chain molecules with a polar head attached to a non-polar hydrocarbon chain. ![](images/lyotropic1.png) What kind of structures do you think these liquid crystals would form when mixed with water? How would this differ if they were mixed with a solvent such as hexane? 4. The disclinations shown below all result in a schlieren brush visible under polarised light microscopy. For each disclination, select the brush that would be seen [Yes for Brush A or No for Brush B] (assume the two polarisers are aligned vertically and horizontally):

| | |
| - | - |
| Brush A | Brush B |
| schlieren | schlieren |

| | | | |
| - | - | - | - |
| Yes | No | a | |
| Yes | No | b | |
| Yes | No | c | |
| Yes | No | d | |

Going further = #### Books * Peter J. Collings and Michael Hird, *Introduction to Liquid Crystals: Chemistry and Physics*, Taylor & Francis, 1997. * Peter J. Collings, *Liquid Crystals: Nature's Delicate Phase of Matter*, 2nd Edition, Princeton University Press, 2002. #### Websites * This course paper on liquid crystals has lots of useful information, particularly once you pass the basics, which are already covered in DoITPoMS. * This video may have distracting music and a slow pace, but it has captions, and the sound is more bearable with the speed multiplier! * A DoITPoMS TLP describing the anisotropy found in liquid crystals and other materials in further detail.
Aims This TLP concerns mechanical testing in the form of uniaxial compressive or tensile loading, and also indentation procedures (both hardness measurement and more advanced techniques). Tested materials are treated as isotropic continua. The focus is on plastic deformation and its representation by constitutive laws. No attempt is made to relate the plasticity characteristics to the micro-mechanisms responsible for them. The main aim is to provide both practical advice about how to carry out these tests and insights into the information provided by them. Before you start Several other TLPs cover relevant background and could be consulted before you start. Introduction The mechanical properties of metals are of huge importance. Most industrial sectors - aerospace, automotive, construction, energy, mining, processing etc - rely heavily on a wide range of metallic components. Commonly, these operate, intermittently or continuously, under highly demanding conditions (of temperature, chemical environment, irradiation and, particularly, applied mechanical load). Efficient design often leads to components being used under conditions close to various limits for the metal concerned. A range of mechanical properties are relevant, but the most important are those that dictate the onset and progression of plastic deformation, and subsequent fracture. They depend in a highly complex manner on microstructure, such that they must always be measured experimentally. Furthermore, the microstructure, and hence the properties, can change during service: extended periods under various combinations of stress, temperature, irradiation, corrosive environments etc can cause significant changes. Central to this scenario, and indeed to the whole gamut of metal processing and usage, is the way in which mechanical testing of metals is carried out. Various types of test have been developed, but the most widely used are those based on uniaxial (tensile or compressive) loading. These appear simple, but in detail they are not. Another type of test in extensive use, also straightforward in principle, but not in detail, is indentation testing. This type of test has long been used to obtain semi-quantitative "hardness numbers", but it can also be employed in a more sophisticated manner to infer stress-strain curves. This TLP covers all of these tests and also provides some basic background to them. Deviatoric (von Mises) and Hydrostatic Stresses and Strains = Plastic deformation of metals is stimulated solely by the deviatoric (shape-changing) component of the stress state, often termed the ***von Mises stress***, and is unaffected by the hydrostatic component. This is consistent with the fact that plastic deformation (of metals) occurs at ***constant volume***. It follows that the material response (stress-strain relationship) should be the same in tension and compression. This is basically correct, although the difference between ***true*** and ***nominal*** stresses and strains should be noted (see the next page), as should the possible effects of ***necking*** in tension and of ***friction*** (leading to ***barrelling***) in compression - see the following pages. The von Mises stress is given by: \[{\sigma \_{{\rm{VM}}}} = \sqrt {\frac{{{{\left( {{\sigma \_1} - {\sigma \_2}} \right)}^2} + {{\left( {{\sigma \_2} - {\sigma \_3}} \right)}^2} + {{\left( {{\sigma \_3} - {\sigma \_1}} \right)}^2}}}{2}} \qquad \qquad \qquad (1) \] where \( \sigma \_1 \), \( \sigma \_2 \) and \( \sigma \_3 \) are the ***principal stresses***.
It can thus be seen that the von Mises stress is a scalar quantity. The hydrostatic stress can be written \[{\sigma \_{\rm{H}}} = \frac{{{\sigma \_1} + {\sigma \_2} + {\sigma \_3}}}{3} \qquad \qquad \qquad (2)\] This is also a scalar. In the simulation below, the slider bars can be used to change the principal stresses. The von Mises and hydrostatic stresses are then displayed. Simulation 1: Von Mises and Hydrostatic Stresses Under simple uniaxial tension or compression, the von Mises stress is equal to the applied stress, while the hydrostatic stress is equal to one third of it. The von Mises stress is always positive, while the hydrostatic stress can be positive or negative. It's not appropriate to think of the von Mises stress as being "tensile", as one would if it were a normal stress (with a positive sign). It's effectively a type of (volume-averaged) shear stress. Shear stresses do not really have a sign, but it's conventional to treat them as positive, as indeed is done for the von Mises stress. It's also possible to identify deviatoric and hydrostatic components of the (plastic) strain state. Analogous equations to those above are used to obtain these values. The ***von Mises strain*** is often termed the "***equivalent plastic strain***". Again, it always has a positive sign, but this does not mean that it is a "tensile" strain. The hydrostatic plastic strain, on the other hand, always has a value of zero. This follows from the fact that plastic strain does not involve a change in volume. (This is not true of elastic strains, which do in general involve a volume change.) True and Nominal Stresses and Strains = It is common during uniaxial (tensile or compressive) testing to equate the stress to the force divided by the original sectional area, and the strain to the change in length (along the loading direction) divided by the original length. In fact, these are "***engineering***" or "***nominal***" values. The ***true stress*** acting on the material is the force divided by the current sectional area. After a finite (plastic) strain under tensile loading, this area is less than the original area, as a result of the lateral contraction needed to conserve volume, so that the true stress is greater than the nominal stress. Conversely, under compressive loading, the true stress is less than the nominal stress. Consider a sample of initial length *L*0, with an initial sectional area *A*0. For an applied force *F* and a current sectional area *A*, conserving volume, the true stress can be written \[{\sigma \_{\rm{T}}} = \frac{F}{A} = \frac{{FL}}{{{A\_0}{L\_0}}} = \frac{F}{{{A\_0}}}\left( {1 + {\varepsilon \_{\rm{N}}}} \right) = {\sigma \_{\rm{N}}}\left( {1 + {\varepsilon \_{\rm{N}}}} \right) \qquad \qquad \qquad (3)\] where \( \sigma \_{\rm{N}} \) is the nominal stress and \( \varepsilon \_{\rm{N}} \) is the nominal strain. Similarly, the true strain can be written \[{\varepsilon \_{\rm{T}}} = \int\_{{L\_0}}^L {\frac{{{\rm{d}}L}}{L}} = \ln \left( {\frac{L}{{{L\_0}}}} \right) = \ln \left( {1 + {\varepsilon \_{\rm{N}}}} \right) \qquad \qquad \qquad (4) \] The true strain is therefore less than the nominal strain under tensile loading, but has a larger magnitude in compression. While nominal stress and strain values are sometimes plotted for uniaxial loading, it is essential to use true stress and true strain values throughout when treating more general and complex loading situations.
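Equations (1)-(4) are simple to collect into code. A minimal sketch with illustrative numbers:

```python
import math

def von_mises(s1, s2, s3):
    """Deviatoric (von Mises) stress, Eq. (1)."""
    return math.sqrt(((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2) / 2.0)

def hydrostatic(s1, s2, s3):
    """Hydrostatic stress, Eq. (2)."""
    return (s1 + s2 + s3) / 3.0

def true_from_nominal(stress_n, strain_n):
    """True stress and true strain from nominal values, Eqs. (3) and (4)."""
    return stress_n * (1.0 + strain_n), math.log(1.0 + strain_n)

# Uniaxial tension at 100 MPa: sigma_VM = 100 MPa, sigma_H = 33.3 MPa
print(von_mises(100.0, 0.0, 0.0), hydrostatic(100.0, 0.0, 0.0))

# 10% nominal tensile strain at 100 MPa nominal stress
print(true_from_nominal(100.0, 0.10))   # -> (110.0 MPa, 0.0953)
```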
Unless otherwise stated, the stresses and strains referred to in all of the following are true (von Mises) values. The simulation below refers to a material exhibiting ***linear work hardening*** behaviour, so that the (plasticity) stress-strain relationship may be written \[\sigma = {\sigma \_{\rm{Y}}} + K\varepsilon \qquad \qquad \qquad (5) \] where \( \sigma \_{\rm{Y}} \) is the yield stress and *K* is the work hardening coefficient.  The sliders on the left are first set to selected \( \sigma \_{\rm{Y}} \) and *K* values.  The applied force, *F*, is then progressively raised via the third slider.  The graph on the right then shows true stress-true strain plots, and nominal stress-nominal strain plots, while the schematic on the left shows the changing shape of the sample (viewed from one side). Note that the elastic strains are not shown on this plot, so nothing happens until the applied stress reaches the yield stress. Since a typical Young's modulus of a metal is of the order of 100 GPa, and a typical yield stress of the order of 100 MPa, the elastic strain at yielding is of the order of 0.001 (0.1%). Neglecting this has only a small effect on the appearance of most stress-strain curves. Simulation 2: Nominal and True Stresses and Strains Overview of Plasticity and its Representation with Constitutive Laws Plastic deformation of metals most commonly occurs as a result of the ***glide of dislocations***, driven by shear stresses.  (In some cases, ***deformation twinning*** may contribute, but this also requires shear stresses in a similar way, and also involves no volume change.)  In a polycrystal (ie in most metallic samples), individual grains must deform in a cooperative way, so that each undergoes a relatively complex shape change (requiring the operation of ***multiple slip systems***), consistent with those of its neighbours.  The (deviatoric) stress needed to initiate global plasticity in a sample is termed the ***yield stress***. In general, continuation of plastic deformation requires a progressively increasing level of applied stress. This effect is termed “***work hardening***” or “***strain hardening***”. It arises because, as more dislocations are created, and as they interact with each other (creating jogs and tangles), they tend to become less mobile. The yield stress, and the work hardening characteristics, exhibit a complex dependence on ***crystal structure***, ***grain size***, ***crystallographic texture***, ***composition***, ***phase constitution***, ***grain boundary structure***, ***prior dislocation density***, ***impurity levels*** etc.  Even for a given material, these plasticity characteristics can be dramatically changed by ***thermal*** or ***mechanical*** ***treatments*** or by exposure to various ***environments*** (chemical, irradiative etc). Accurate prediction of key mechanical properties of metallic alloys is virtually impossible, even if the microstructure has been carefully and comprehensively characterised.  Such properties must therefore be measured experimentally.  Since they are of great importance for many (industrial) purposes, the measurement techniques need to be fully understood. The yield stress is usually taken to have a single value, but work hardening needs more complex definition.  This must be valid over an appreciable range of plastic strain  -  perhaps 50% or more in some cases.  
Even metals that are relatively hard (and brittle) are normally required to have ***ductility*** levels (plastic strains to failure) of at least several % if they are to be used for engineering purposes. Of course, there is no expectation that the work hardening curve will conform to any particular functional form.  However, in general, the ***work hardening rate*** (gradient of the true stress / true strain plot) tends to decrease progressively with increasing strain, perhaps eventually approaching a plateau.  This is a consequence of competition between the creation of new dislocations, and inhibition of their mobility (by forming tangles etc), and processes (such as climb and cross-slip) that will allow them to become more organised and to annihilate each other.  Initially, the former group of processes tends to dominate, but a balance may eventually be reached, so that the “***flow stress***” ceases to rise.  (With metals, it is very rare, except with single crystals, for the work hardening rate to rise with increasing strain, but this is quite common in certain types of polymer, as a consequence of molecular reorganisation.) Several analytical expressions have been proposed to characterise the work hardening of metals, but only two are in frequent use.  The first is the ***Ludwik-Hollomon*** equation \[\sigma = {\sigma \_{\rm{Y}}} + K{\varepsilon ^n} \qquad \qquad \qquad (6)\] where \( \sigma \) is the (von Mises) applied stress, \(\sigma \_{\rm{Y}} \) is its value at yield, \( \varepsilon \) is the plastic (von Mises) strain, K is the ***work hardening coefficient*** and *n* is the ***work hardening exponent***.  The second is the ***Voce*** equation \[\sigma = {\sigma \_{\rm{s}}} - ({\sigma \_{\rm{s}}} - {\sigma \_{\rm{Y}}})\exp \left( {\frac{{ - \varepsilon }}{{{\varepsilon \_0}}}} \right) \qquad \qquad \qquad (7) \] The stress \( \sigma \_{\rm{s}} \) is a saturation level, while \( \varepsilon \_0 \) is a characteristic strain for the exponential approach of the stress towards this level.  The simulation below can be used to explore their shapes.  In practice, the L-H equation is the more commonly used.  It does allow ***linear work hardening*** (*n* = 1), which is sometimes observed, whereas the Voce equation does not. Simulation 3: Ludwik-Hollomon and Voce constitutive laws Tensile Testing - Practical Basics The ***uniaxial tensile test*** is the most commonly-used mechanical testing procedure.  However, while it is simple in principle, there are several practical challenges, as well as a number of points to be noted when examining outcomes.  Specimen Shape and Gripping - A central issue concerns the specimen shape.  The behaviour is monitored in a central section (the “***gauge length***”), in which a uniform stress is created.  The grips lie outside of this section, where the sample has a larger sectional area, so that stresses are lower.  If this is not done, then stress concentration effects near the grips are likely to result in premature deformation and failure in that area.  Several different geometries are possible. A typical sample and gripping system are shown in the photo below. ![Tensile test sample set up](images/Tensile_testing_sample_setup_s.jpg) Photo 1: Static image showing a typical tensile specimen and set of grips Measurement of Load and Displacement All testing systems have some sort of “***loading train***”, of which the sample forms a part.  
This “train” can be relatively complex  -  for example, it might involve a rotating worm drive (screw thread) somewhere, with the force transmitted to a ***cross-head*** and thence via a gripping system to the sample and then to a base-plate of some sort.  It does, of course, need to be arranged that, apart from the sample, all of the components loaded in this way experience only elastic deformation.  The same force (load) is being transmitted along the complete length of the loading train.  Measurement of this load is thus fairly straightforward.  For example, a ***load cell*** can be located anywhere in the train, possibly just above the gripping system.  In some simple systems, such as hardness testers or creep rigs, a ***fixed load*** may be generated by a ***dead weight***. Measurement of the displacement (in the gauge length) is more of a challenge.  Sometimes, a measuring device is built into the set-up  -  for example, it could measure the amount of rotation of a worm drive.  In such cases, however, measured displacements include a contribution (elastic) from various elements of the loading train, and this could be quite significant.  It may therefore be important to apply a ***compliance calibration***.  This involves subtracting from the measured displacement the contribution due to the compliance (inverse of stiffness) of the loading train.  This can be measured using a sample of known stiffness (ensuring that it remains elastic). Displacement Measuring Devices Several types of device can be used to measure displacement, including Linear Variable Displacement Transducers (***LVDT***s), ***eddy current*** gauges and ***scanning laser extensometers***.  These have resolutions of the order of 1 µm.  More specialised (and accurate) devices include ***parallel plate capacitors*** and ***interferometric optical*** set-ups, although they often have more limited measurement ranges. Alternatively, displacement can be measured directly on the gauge length, eliminating concerns about the system compliance. Devices of this type include clip-gauges (knife edges pushing lightly into the sample) and strain gauges (stuck on the sample with adhesive). The latter have good accuracy (±0.1% of the reading), but are limited in range (~1-2% strain). They are useful for measurement of the sample stiffness (Young's modulus), but not for plastic deformation. A versatile technique, useful for mapping strains over a surface, is Digital Image Correlation (DIC), in which the motion of features ("speckles") in optical images is followed automatically during deformation, with displacement resolutions typically of the order of a few μm. In the video below, which also shows the development of the stress-strain curve, clip gauges are being used to measure the displacement, and hence the strain. It can be seen that, when the strain reaches a certain level (~25% in this case), the specimen starts to "neck". This is apparent in the video and it can also be seen that the onset of necking coincides, at least approximately, with a plateau (peak) in the nominal stress – nominal strain curve. This important phenomenon is examined in more detail on the next page.  Video 1: Tensile testing of annealed Cu sample (video and evolving nominal stress-strain plot) Tensile Testing - Necking and Failure = With a brittle material, tensile testing may give an approximately linear stress-strain plot, followed by fracture (at a stress that may be affected by the presence and size of flaws).  
However, most metals do not behave in this way and are likely to experience considerable plastic deformation before they fail.  Initially, this is likely to be uniform throughout the gauge length. Eventually, of course, the sample will fail (fracture).  However, in most cases, failure will be preceded by at least some ***necking***.  The formation of a neck is a type of instability, the onset of which is closely tied in with work hardening. It is clear that, once a neck starts to form, the (true) stress there will be higher than elsewhere, possibly leading to more straining there, further reducing the local sectional area and accelerating the effect. In the complete absence of work hardening, the sample will be very susceptible to this effect and will be prone to necking from an early stage.  (This will be even more likely in a real component under load, where the stress field is likely to be inhomogeneous from the start.)  Work hardening, however, acts to suppress necking, since any local region experiencing higher strain will move up the stress-strain curve and require a higher local stress in order for straining to continue there.  Generally, this is sufficient to ensure uniform straining and suppress early necking.  However, since the work hardening rate often falls off with increasing strain (see earlier page), this balance is likely to shift and may eventually render the sample vulnerable to necking. Furthermore, some materials (with high yield stress and low work hardening rate) may indeed be susceptible to necking from the very start. Considère's Construction This situation was analysed originally by Armand Considère (1885), in the context of the stability of structures such as bridges.  Instability (onset of necking) is expected to occur when an increase in the (local) strain produces no net increase in the load, *F*.  This will happen when \[{\rm{\Delta }}F = 0 \qquad \qquad \qquad (8) \] This leads to \[F = \sigma A,\qquad \qquad ∴ {\rm{d}}F = A{\rm{d}}\sigma + \sigma {\rm{d}}A = 0\] \[∴ \frac{{{\rm{d}}\sigma }}{\sigma } = \frac{{ - {\rm{d}}A}}{A} = \frac{{{\rm{d}}L}}{L} = {\rm{d}}\varepsilon \] \[∴ \sigma = \frac{{{\rm{d}}\sigma }}{{{\rm{d}}\varepsilon }} \qquad \qquad \qquad (9) \] Necking is thus predicted to start when the slope of the true stress / true strain curve falls to a value equal to the true stress at that point.  This construction can be explored using the simulation below, in which the true stress – true strain curve is represented by the L-H equation.  Simulation 4: Considère's construction (basic construction) This condition is commonly expressed in terms of the nominal strain. \[∴ \frac{{{\rm{d}}\sigma }}{{{\rm{d}}\varepsilon }} = \frac{{{\rm{d}}\sigma }}{{{\rm{d}}{\varepsilon \_{\rm{N}}}}}\frac{{{\rm{d}}{\varepsilon \_{\rm{N}}}}}{{{\rm{d}}\varepsilon }} = \frac{{{\rm{d}}\sigma }}{{{\rm{d}}{\varepsilon \_{\rm{N}}}}}\left( {\frac{{{\rm{d}}L/{L\_0}}}{{{\rm{d}}L/L}}} \right) = \frac{{{\rm{d}}\sigma }}{{{\rm{d}}{\varepsilon \_{\rm{N}}}}}\left( {\frac{L}{{{L\_0}}}} \right) = \frac{{{\rm{d}}\sigma }}{{{\rm{d}}{\varepsilon \_{\rm{N}}}}}\left( {1 + {\varepsilon \_{\rm{N}}}} \right)\] \[∴ \sigma = \frac{{{\rm{d}}\sigma }}{{{\rm{d}}{\varepsilon \_{\rm{N}}}}}\left( {1 + {\varepsilon \_{\rm{N}}}} \right)\qquad \qquad \qquad (10) \] The condition can therefore also be formulated in terms of a plot of true stress against nominal strain.  On such a plot, necking will start where a line from the point \(\varepsilon \_{\rm{N}} \) = -1 forms a tangent to the curve.  
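For a material obeying the L-H law, the Considère condition (Eqn (9)) can also be evaluated numerically. The Python sketch below (illustrative only; the parameter values are arbitrary and SciPy is assumed to be available) finds the true strain at the onset of necking and converts the true stress at that point to the corresponding nominal stress (ie the UTS, discussed on the next page).

```python
import numpy as np
from scipy.optimize import brentq  # assumes SciPy is available

def necking_strain_LH(sigma_y, K, n):
    """True strain at the onset of necking for a Ludwik-Hollomon material,
    from the Considere condition sigma = d(sigma)/d(eps), i.e.
        sigma_y + K * eps**n = n * K * eps**(n - 1)
    solved numerically for eps."""
    f = lambda eps: (sigma_y + K * eps**n) - n * K * eps**(n - 1)
    return brentq(f, 1e-6, 2.0)  # bracket chosen to cover typical metals

# Illustrative values (MPa): sigma_Y = 200, K = 500, n = 0.3
eps_neck = necking_strain_LH(200.0, 500.0, 0.3)
sigma_neck = 200.0 + 500.0 * eps_neck**0.3   # true stress at necking onset
uts = sigma_neck * np.exp(-eps_neck)         # corresponding nominal stress
print(eps_neck, sigma_neck, uts)

# Check: with sigma_Y = 0 (pure Hollomon law), the solution is simply eps = n
print(necking_strain_LH(0.0, 500.0, 0.3))    # ~0.3
```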
This construction can be explored using the simulation below.  Simulation 5: Considère's construction, based on a true stress-nominal strain plot If the true stress – true strain relationship does conform in this way to the L-H equation, it follows that the necking criterion (Eqn.(9)) can be expressed as \[{\sigma \_{\rm{Y}}} + {K}{\varepsilon ^n} = n{K}{\varepsilon ^{n - 1}} \qquad \qquad \qquad (11) \] which can readily be solved numerically. Ultimate Tensile Stress (UTS) and Ductility - It may be noted at this point that it is common during tensile testing to identify a “strength”, in the form of an “ultimate tensile stress” (***UTS***).  This is usually taken to be the peak on the nominal stress v. nominal strain plot, which corresponds to the onset of necking.  It should be understood that this value is not actually the true stress acting at failure.  This is difficult to obtain in a simple way, since, once necking has started, the (changing) sectional area is unknown - although the behaviour can often be captured quite accurately via FEM modelling – see below. Also, the "ductility", often taken to be the nominal strain at failure (usually well beyond the strain at the onset of necking), does not correspond to the true strain in the neck when fracture occurs. UTS and ductility values therefore provide only rather loose indications of the strength and toughness of the material. Nevertheless, they are quite widely quoted. FEM Simulation of Tensile Tests It is sometimes stated that the initiation of necking during tensile testing arises from (small) variations in sectional area along the gauge length of the sample. However, in practice, for a particular material, its onset does not depend on whether great care has been taken to avoid any such fluctuations. Furthermore, the introduction of such defects in an FEM model does not, in general, significantly affect the predicted onset. The (modelling) condition that does lead to necking is the assumption that, near the end of the gauge length, the sample is constrained from contracting laterally. In practice, due to the increasing sectional area in that region, and because the material beyond the gauge length will undergo little or no deformation, that condition is often a fairly realistic one. In fact, for any true stress – true strain relationship, including an experimental one that cannot be expressed as an equation, FEM simulation can be used to predict the onset of necking. The simulation below shows, for two materials (with low and high work hardening rates), how the behaviour can be accurately captured and the stress and strain fields explored at any point during the test. (These two materials will also be used to illustrate some effects concerned with compression and indentation testing.) The L-H law is being used here, with best fit values for the 3 parameters in each case.   Simulation 6: Tensile test FEM simulation data, for two materials, together with the corresponding videos and experimental stress-strain curves. Compression Testing - Practical Basics, Friction & Barrelling = Uniaxial testing in compression is in many ways simpler and easier than in tension.  There are no concerns about gripping and no possibility of necking or other localised plasticity.  The sample is usually a simple cylinder or cuboid.  However, there are some potential difficulties.  One of these is the danger of (plastic) **buckling**, particularly if relatively large strains (>~10%) are to be created.  
In order to avoid this, the aspect ratio (height / diameter) must be kept relatively low  -  probably not much more than unity.  Since a very large sectional area might lead to excessive load requirements, this often means that the height (gauge length) of the sample is limited.  This in turn leads to relatively low displacements, placing a premium on measurement accuracy (with the points made about compliance calibration, when referring to tensile loading, applying equally here). Effect of Friction between Sample and Platen There are also concerns about the effect of friction. This is potentially important, since one outcome of friction is that the stress and strain fields become non-uniform (see the simulation below), so that the nominal stress-strain curve cannot be converted to a true version via use of the analytical equations (even if the value of the coefficient of friction, μ, is known). In practice, it is common to apply a lubricant to the contact surfaces of the sample and to assume that any effect of friction will be small.  However, the high contact pressure tends to force lubricant out of the region between platen and sample, so this assumption may not be valid. Two extreme cases can be identified.  The first, which is commonly assumed, is that there is **unhindered sliding** at the interface (μ = 0).  The sectional area will remain uniform along the sample length during deformation (no barrelling) and there is no frictional work.  The complementary limiting case is that of **no sliding**.  This also involves no frictional work, but **barrelling** occurs from the start of the test.  While the exact shape of the “barrel”, as a function of the applied load, will depend on the aspect ratio, it is clear that significant barrelling will **invalidate** the test  -  the stress now varies along the length of the sample and the relationship between the true stress-strain curve and the outcome will be a complex one. FEM Simulation In practice, there is likely to be at least some frictional sliding (with μ > 0, so that energy is dissipated), but also some barrelling.  The sliding is likely to occur over only part of the surface, since the interfacial shear stress rises with increasing distance from the loading axis (where it is zero).  The outcomes of this can be explored using the simulation below, which is based on FEM modelling.  Simulation 7: FEM simulation of a compression test Indentation Hardness Measurement Currently, most mechanical testing aimed at obtaining quantitative plasticity characteristics is carried out by uniaxial loading in tension or, to a lesser extent, in compression.  Tensile testing also provides a measure of the “failure strength” (in the form of an “ultimate tensile stress”) and the ductility.  However, these tests require relatively large (uniform) pieces of material, extensive machining to produce samples (at least for tensile testing), considerable care in the way that they are carried out and sound background knowledge in interpretation of the outcome (load-displacement data).  These represent substantial limitations and challenges. A completely different approach, circumventing virtually all of these difficulties, is that of **indentation**.  This can be carried out on relatively **small samples** of simple shape  -  just a flat surface is required, can provide **point-to-point mapping** of plasticity characteristics over a surface and can be used for **non-destructive** field testing of components in situ.  
It involves pushing a hard indenter into the surface with a known force and measuring the size of the resultant indent.  From this measurement, a “**hardness number**” is obtained. The indenter must remain elastic  -  ie must experience no plastic deformation -  and is usually made of **diamond** or another **very hard material**.  There are several different types of hardness test, including those of **Vickers**, **Brinell**, **Knoop** and **Rockwell**.  They were all developed several decades ago and they essentially only differ in the shape (and in some cases the material) of the indenter.  A typical hardness test is that of Vickers, the geometry of which is shown below.  The (diamond) indenter is a right pyramid with a square base and an angle of 136˚ between opposite faces.  ![Vickers hardness test geometry](images/vickers_hardness.jpg) Geometry of the Vickers Hardness Test The hardness number is the ratio of the **applied force**, in kgf, to the **contact area** (NOT the projected area), in mm².  The measured indent diagonal, D, taken as the average of D1 and D2, is, however, measured **in projection**.  The value of HV is therefore given by \[{H\_{\rm{v}}} = \frac{{2F\sin \left( {\frac{{136^\circ }}{2}} \right)}}{{{D^2}}} = 1.854\frac{F}{{{D^2}}} \qquad \qquad \qquad \qquad (12) \] so a simple calculation allows the hardness number to be obtained from the measured value of D. There are, of course, certain assumptions incorporated into this analysis.  For example, elastic recovery of the specimen is neglected.  Furthermore, in practice the specimen may exhibit “**pile-up**” or “**sink-in**” around the indent, such that the true area of contact differs from that obtained via the simple geometrical picture above. The stress acting on the contact area (in MPa) is obtained on multiplying the hardness number by 9.81.  However, to say the least, there is no simple relationship between this stress and the stress field generated in the material beneath the indenter.  The latter is complex and also depends in a complex way on the shape of the indenter and the plasticity characteristics of the material.  In fact, even for a given type of hardness test, the number obtained tends to vary with the applied load.  This occurs because the load affects the level of plastic strain, which in turn dictates which portions of the stress-strain curve affect the outcome. Nevertheless, it is in practice quite common to use simple expressions so as to obtain a yield stress value from a hardness number.  This is based on the fact that, in the absence of work hardening, and for a given indenter shape, there is an approximately linear relationship between the yield stress (which will be the von Mises stress throughout the volume that has plastically deformed) and the hardness number.  For the Vickers test \[{\sigma \_{\rm{Y}}} \approx \frac{{{H\_{\rm{v}}}}}{3} \qquad \qquad \qquad \qquad (13) \] Since very few materials exhibit no work hardening at all, a yield stress obtained in this way is not reliable.  In fact, it is likely to be indicative of a flow stress averaged in some way over the range of plastic strain experienced during the test, which depends on the depth of penetration, the indenter shape and the work hardening characteristics themselves.  It should never be regarded as more than a semi-quantitative indication of the resistance that the material offers to plastic deformation. 
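As a worked example of Eqns (12) and (13), the short Python sketch below (the load and diagonal values are illustrative, not measured data) evaluates a Vickers hardness number and the corresponding rough yield stress estimate.

```python
import math

def vickers_hardness(F_kgf, D_mm):
    """Vickers hardness number, Eqn (12): load in kgf, mean projected
    indent diagonal D in mm. The factor 1.854 = 2 sin(136/2 degrees)."""
    return 2.0 * F_kgf * math.sin(math.radians(136.0 / 2.0)) / D_mm**2

def approx_yield_stress_MPa(Hv):
    """Rough yield stress estimate, Eqn (13): sigma_Y ~ Hv / 3, with the
    kgf/mm^2 hardness converted to MPa via the factor 9.81. This is
    semi-quantitative only - it ignores work hardening, pile-up etc."""
    return 9.81 * Hv / 3.0

Hv = vickers_hardness(30.0, 0.5)           # 30 kgf load, 0.5 mm mean diagonal
print(round(Hv, 1))                        # ~222.5 (kgf/mm^2)
print(round(approx_yield_stress_MPa(Hv)))  # ~728 MPa
```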
Indentation Plastometry = While hardness testing gives only a semi-quantitative indication of the resistance to plastic deformation, the outcome of an indentation operation (ie the size and shape of the residual indent) does depend in a sensitive way on the (true) stress-strain curve of the material, potentially over a large range of plastic strain. Unfortunately, extracting only a single measurement of the indent diameter (while simple experimentally) exploits only a minute proportion of the information incorporated into this residual profile. Measurement of the full profile now forms the basis of a new methodology for obtaining complete (true) stress-strain curves from indentation experiments. This is termed indentation plastometry. Before explaining how the procedure works, it should perhaps be emphasized that, even with conventional (uniaxial) tension or compression testing, it is not necessarily straightforward to obtain the correct true stress - true strain relationship. Of course, it is a simple matter to convert nominal stress and nominal strain to true values. However, this is based on the assumption that the sample is deforming uniformly (within the gauge length) throughout the test. In tension, depending on work hardening characteristics, some necking could be taking place - perhaps from a very early stage and quite possibly without it being at all apparent by simply looking at the sample during the test. This will invalidate the standard conversion of nominal stresses and strains to true values. Similarly, in compression testing there is likely to be at least some frictional resistance to interfacial sliding, and hence a degree of barrelling. Again, this invalidates the standard procedure for obtaining the true stress – true strain relationship (although the effect may be relatively small). For both types of test, as could be inferred from the previous pages, there is a procedure for obtaining the correct stress-strain curve (in the form of the values of the parameters in a constitutive law such as the L-H expression). It involves iterative FEM simulation of the test, evaluating each time a “goodness-of-fit” parameter between the experimental outcome (nominal stress – nominal strain relationship) and that predicted by the model. A search is then made in parameter space, repeatedly simulating the process, until convergence is obtained on the best-fit solution (set of plasticity parameter values). In the case of compression, the value of the coefficient of friction will be part of this parameter set, although it's likely to have a similar value for a wide range of materials. In practice, such procedures are rarely carried out for uniaxial testing. Both necking and friction are often simply ignored. In fact, sometimes only nominal stress – nominal strain curves are obtained (although they certainly don't fully capture the plasticity characteristics and they can't be used in simulation of more complex multi-axial loading situations). However, there is now a growing awareness that the above methodology can be applied to any loading configuration and, in particular, to indentation (most commonly with a spherical indenter). The experimental outcome can be the load-displacement plot, although it is often more convenient and accurate to use the residual indent profile. 
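The iterative search described above can be sketched in code. In the Python outline below, the FEM simulation is replaced by a trivial stand-in forward model (the L-H law itself), so that the example is self-contained and runnable; the function names, misfit definition and parameter values are all illustrative, and in a real implementation the forward model would be a full FEM run of the test.

```python
import numpy as np
from scipy.optimize import minimize  # assumes SciPy is available

rng = np.random.default_rng(0)

# Stand-in forward model: in real indentation plastometry this would be an
# FEM simulation returning, e.g., the residual indent profile. Here it is
# just the L-H law, to keep the sketch self-contained.
def forward_model(params, eps):
    sigma_y, K, n = params
    return sigma_y + K * eps**n

# Synthetic "experimental" outcome, generated from known parameters plus noise
eps = np.linspace(0.01, 0.5, 40)
target = forward_model([250.0, 600.0, 0.4], eps) + rng.normal(0, 2.0, eps.size)

# Goodness-of-fit (misfit) parameter: normalised sum of squared residuals
# between measured and predicted outcomes (one of several reasonable choices)
def misfit(params):
    pred = forward_model(params, eps)
    return np.sum((pred - target)**2) / np.sum(target**2)

# Convergence algorithm searching (sigma_Y, K, n) space from a trial set
result = minimize(misfit, x0=[100.0, 400.0, 0.2], method="Nelder-Mead")
print(result.x)    # best-fit L-H parameters, converging towards (250, 600, 0.4)
print(result.fun)  # final misfit value
```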
Software packages are available for implementing this procedure automatically, with an indication provided about the reliability of the result (fidelity of capturing the actual stress-strain relationship using the constitutive law concerned, with the optimized set of parameter values). The procedure is illustrated below. FEM Simulation Iterative FEM simulation of the indentation process is central to the procedure. Starting with some trial (L-H) plasticity parameter values, the simulation is run (to a prescribed load or penetration depth). A comparison is then made between predicted and measured outcomes – either the load-displacement plot or the residual indent profile can be used. This comparison is characterised by the value of a misfit parameter. For the one used in the simulation below, a value below about \(10^{-3}\) represents good agreement. A convergence algorithm is used to sequentially select parameter value sets that give improved agreement, until a stable (best fit) combination is obtained. The simulation demonstrates, for the two materials being studied, how this set (and hence the inferred stress-strain curve) is obtained. Outcomes are shown for the initial iteration and then for a few others as the convergence is achieved. It may be noted that a constant value for μ of 0.3 was used in these simulations: experience has shown that this is at least approximately correct for indentation (which is normally unlubricated). If compression testing is carried out with lubrication, then a slightly lower value (~0.15) is likely to be appropriate.  Simulation 8: Iterative FEM simulation during indentation plastometry Comparison between Indentation and Uniaxial Outcomes - There are several ways in which outcomes from these three types of test can be compared. A tensile loading plot of nominal stress against nominal strain is a common one. This allows the UTS (ultimate tensile stress) to be evaluated, since it corresponds (for a metal with at least some ductility) to the peak, where necking is expected to start. Such plots can be obtained by simulation of the tensile test, using sets of L-H parameters obtained by iterative FEM modelling. However, the most convenient way to compare the outcomes is by simply plotting the L-H curves (true stress v. true plastic strain). This is done below. It can be seen that the level of consistency between the three methods is good. It is, however, important to understand that it will never be perfect, since an actual (true) stress-strain curve will never conform perfectly to the L-H law (or to any other analytical equation). The most important point here is that, to good accuracy, a plot of this type (and hence a nominal stress v. nominal strain tensile plot) can be extracted solely from indentation data, which can be obtained in a non-destructive way from small samples of simple shape and also from components in use. Such curves are direction-averaged, which should be borne in mind if the material is strongly anisotropic. ![](images/true_stress_strain.jpg) Image 2: Plots, for both materials, of true stress against true (plastic) strain. The tension plots are simply the experimental data, converted to true values, up to the strain level at which necking started. The other two are Ludwik-Hollomon plots, corresponding to the best fit sets of L-H parameter values (shown), obtained via iterative FEM simulation of the compression or indentation tests. 
Summary = This TLP has covered the basics of how mechanical testing of metals is carried out, aimed at study of the onset and development of plastic deformation and how it affects the "strength" (failure stress) and ductility. It is based on continuum mechanics - ie there is no detailed consideration of the mechanisms of plasticity or of effects such as anisotropy and inhomogeneity that could arise from microstructural features. The plasticity, and also the consequential necking and fracture that are likely to occur under tensile loading, are characterised by constitutive laws. Two commonly-used expressions are described here. Their usage in FEM modelling is outlined, aimed at obtaining detailed information about the stress and strain fields that are generated during different types of test.

Questions = ### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. Which of these statements about deviatoric and hydrostatic stresses and strains is correct, in the context of plastic deformation?

|   |   |   |
| - | - | - |
|   | a | The deviatoric (von Mises) stress can be positive (tensile) or negative (compressive). |
|   | b | The equivalent plastic strain (von Mises strain) is always zero. |
|   | c | The deviatoric (von Mises) stress is a second rank tensor. |
|   | d | The hydrostatic component of the strain state is always zero. |

2. Which of these statements about true and nominal stresses and strains is correct?

|   |   |   |
| - | - | - |
|   | a | Conversion of nominal stresses and strains to true values, using analytical equations, is only correct if they are uniform throughout the sample. |
|   | b | It is acceptable to use nominal stress v. nominal strain relationships obtained from uniaxial tests when predicting the behaviour under multi-axial loading. |
|   | c | The nominal strain is always greater than the true strain. |
|   | d | The true stress is always greater than the nominal stress. |

3. Which of these statements about using constitutive “laws” to describe plastic deformation is correct?

|   |   |   |
| - | - | - |
|   | a | Constitutive laws are implemented taking account of both the deviatoric and the hydrostatic components of the stress state. |
|   | b | There is no reason to expect the actual plastic deformation behaviour to conform accurately to any constitutive law. |
|   | c | Constitutive laws for metal plasticity must predict a continuously decreasing work hardening rate as the strain increases. |
|   | d | Constitutive laws are based on the assumption that plasticity depends on the peak shear stress generated within the sample. |

4. Which of these statements about tensile testing of metals is correct?

|   |   |   |
| - | - | - |
|   | a | Necking arises in a location where the sample initially has a slightly smaller sectional area. |
|   | b | Necking arises when the true stress in the location concerned starts to exceed the yield stress. |
|   | c | Necking of the sample is predicted to start when the gradient of the plot of nominal stress against nominal strain starts to become negative. |
|   | d | Necking of the sample is predicted to start when the gradient of the plot of true stress against true strain starts to become negative. |

5. Which of these statements about compressive testing of metals is correct?

|   |   |   |
| - | - | - |
|   | a | It is always acceptable to assume that friction between sample and platen has a negligible effect on the outcome, provided that the surface is lubricated. |
|   | b | The complete absence of barrelling implies that friction between sample and platen is negligible. |
|   | c | If there is no sliding between sample and platen, then it is safe to assume that the stress and strain are uniform throughout the test. |
|   | d | The nominal stress v. nominal strain plot can be converted to a true stress v. true strain plot, using analytical equations, provided there is no interfacial sliding. |

6. Which of these statements about hardness testing is correct?

|   |   |   |
| - | - | - |
|   | a | A hardness number can be used to obtain an accurate value for the yield stress. |
|   | b | Hardness is defined, for any given applied force, as that force divided by the contact area. |
|   | c | During hardness testing, the deviatoric stress throughout the plastic zone under the indenter is equal to the yield stress. |
|   | d | For a given type of hardness test, the value obtained when testing a given sample is independent of the applied force. |

7. Which of these statements about indentation plastometry is correct?

|   |   |   |
| - | - | - |
|   | a | The technique involves repeated numerical simulation of the indentation process, using a constitutive law to represent the true stress v. true strain relationship that applies between the von Mises stress and the equivalent plastic strain. |
|   | b | The technique involves repeated numerical simulation of the indentation process, using a constitutive law to represent the nominal stress v. nominal strain relationship that applies between the von Mises stress and the equivalent plastic strain. |
|   | c | The technique involves repeated numerical simulation of the indentation process, using a constitutive law to represent the true stress v. true strain relationship that applies between the hydrostatic stress and the hydrostatic strain. |
|   | d | The deduced true stress v. true strain relationship will be identical to the curve obtained experimentally during tensile testing, after conversion of nominal stresses and strains to true values, using analytical equations. |

Going further = Perhaps unsurprisingly, there are relatively few sources that provide background information regarding something as basic as uniaxial mechanical testing, although there are, of course, plenty of books and websites that cover the fundamentals of stress analysis etc. Among relatively recent books in this area, with an accent on FEM, are the following: “*Practical Stress Analysis with Finite Elements*”, Bryan J. MacDonald, Glasnevin publishing (2007), ISBN: 978-0-9555781-0-6. “*Structural and Stress Analysis: Theories, Tutorials and Examples*”, Jianqiao Ye, Taylor & Francis (2008), ISBN: 0-203-02900-3. Regarding Indentation Plastometry, which is a very recent development, there are as yet no published books and indeed the software necessary to implement the technology is not yet widely available in user-friendly, commercially mature form. However, there are websites that describe the methodology, where such access is likely to become available in due course.
Aims On completion of this TLP you should:

* Recognise the stress and strain tensors and the components into which they can be separated.
* Know how to diagonalise a stress tensor for plane stress, and recognise what a principal stress tensor is and why principal stress tensors are useful.
* Understand what a yield criterion is and how it can be used.
* Have an appreciation of different yield criteria and the materials for which they are appropriate.

Before you start

* You should understand the concept of slip and the different ways in which materials (and in particular metals) undergo slip. The teaching and learning package on slip covers the fundamentals.
* In polycrystalline materials, the distribution of grain orientations and the constraint to deformation offered by neighbouring grains gives rise to a simplified overall stress-strain curve in comparison to the curve from a single crystal sample. Crystal structure is also important in polycrystalline samples - the von Mises criterion states that a minimum of five independent slip systems must exist for general yielding.

Introduction Metal forming involves a permanent change in the shape of a material as a result of an applied stress. The work done in deforming the sample is not recoverable. This plastic deformation involves a change in shape without a change in volume and without melting. It is desirable to know the stress level at which plastic deformation begins, i.e. the onset of yielding. In *uniaxial* loading, this is the point where the straight, elastic portion of the line first begins to curve. This point is the yield stress. The animation below shows a typical stress-strain curve for a polycrystalline sample, obtained from a uniaxial tensile test.    Single crystal vs polycrystalline = The theory of slip in single crystals is well established. When an item is made from metal, however, a single crystal is not generally used. A piece of metal used to make a bicycle or a handrail is made of many small crystals or grains. This affects the behaviour of the metal in many ways:

* The grains are not aligned: for example, the [001] axis of one grain might be pointing in a different direction to the [001] axis of its neighbour. This means that different grains slip by different strains when a stress is applied to the whole material, and offer different amounts of resistance to the force. These all contribute to the way that the whole block deforms under stress.
* The grain boundaries formed where the grains meet have distinct properties from the rest of the material. When the two crystals either side of a grain boundary have different orientations, defects such as dislocations cannot pass simply through the boundary. Effects like these also have an effect on the response of a metal to stresses.

For these reasons, it is almost impossible to predict in detail from atomic scale theory how a block of metal will deform plastically when a suitable force is applied to it. We must instead find out what happens from experimental observations and then develop a macroscopic engineering model to describe and predict the behaviour of the polycrystalline sample. Representing stress as a tensor = To understand this page, you first need to understand tensors! Good sources are the books by J.F. Nye, G.E. Dieter, and D.R. Lovett referred to in the Going further section of this TLP. 
Many undergraduate university courses in physical science or engineering have a series of lectures on tensors, such as the course at the Cambridge University Department of Materials Science and Metallurgy. The stress tensor is a **field tensor** – it depends on factors external to the material. In order for a stress not to rotate the material, the stress tensor must be symmetric: σ*ij* = σ*ji* – it has mirror symmetry about the diagonal. The general form is thus: $$\left( {\matrix{ {{\sigma \_{11}}} & {{\sigma \_{12}}} & {{\sigma \_{31}}} \cr {{\sigma \_{12}}} & {{\sigma \_{22}}} & {{\sigma \_{23}}} \cr {{\sigma \_{31}}} & {{\sigma \_{23}}} & {{\sigma \_{33}}} \cr } } \right)$$ or, in an alternative notation, $$\left( {\matrix{ {{\sigma \_{xx}}} & {{\tau \_{xy}}} & {{\tau \_{zx}}} \cr {{\tau \_{xy}}} & {{\sigma \_{yy}}} & {{\tau \_{yz}}} \cr {{\tau \_{zx}}} & {{\tau \_{yz}}} & {{\sigma \_{zz}}} \cr } } \right)$$ The general stress tensor has six independent components and could require us to do a lot of calculations. To make things easier it can be rotated into the **principal stress tensor** by a suitable change of axes. ![](../images/divider400.jpg) ### Principal stresses The magnitudes of the components of the stress tensor depend on how we have defined the orthogonal x1, x2 and x3 axes. For every stress state, we can choose the axes so that the only non-zero components of the stress tensor are the ones along the diagonal: $$\left( {\matrix{ {{\sigma \_1}} & 0 & 0 \cr 0 & {{\sigma \_2}} & 0 \cr 0 & 0 & {{\sigma \_3}} \cr } } \right)$$ that is, there are no shear stress components, only normal stress components. Of all the tensors we could use to express the stress state that exists, this is the **principal stress tensor**. The elements σ1, σ2, σ3 are the **principal stresses.** The positions of the axes now are the **principal axes**. While it may be that σ1 > σ2 > σ3, it only matters that the x1, x2 and x3 axes define the directions of the principal stresses. The largest principal stress is bigger than any of the components found from any other orientation of the axes. Therefore, if we need to find the largest stress component that the body is under, we simply need to diagonalise the stress tensor. Remember – we have not changed the stress state, and we have not moved or changed the material – we have simply rotated the axes we are using and are looking at the stress state seen with respect to these new axes. ![](../images/divider400.jpg) ### Hydrostatic and deviatoric components The stress tensor can be separated into two components. One component is a **hydrostatic** or **dilatational** stress that acts to change the volume of the material only; the other is the **deviatoric** stress that acts to change the shape only. $$\left( {\matrix{ {{\sigma \_{11}}} & {{\sigma \_{12}}} & {{\sigma \_{31}}} \cr {{\sigma \_{12}}} & {{\sigma \_{22}}} & {{\sigma \_{23}}} \cr {{\sigma \_{31}}} & {{\sigma \_{23}}} & {{\sigma \_{33}}} \cr } } \right) = \left( {\matrix{ {{\sigma \_H}} & 0 & 0 \cr 0 & {{\sigma \_H}} & 0 \cr 0 & 0 & {{\sigma \_H}} \cr } } \right) + \left( {\matrix{ {{\sigma \_{11}} - {\sigma \_H}} & {{\sigma \_{12}}} & {{\sigma \_{31}}} \cr {{\sigma \_{12}}} & {{\sigma \_{22}} - {\sigma \_H}} & {{\sigma \_{23}}} \cr {{\sigma \_{31}}} & {{\sigma \_{23}}} & {{\sigma \_{33}} - {\sigma \_H}} \cr } } \right)$$ where the hydrostatic stress is given by \({\sigma \_H} = {1 \over 3}\left( {{\sigma \_1} + {\sigma \_2} + {\sigma \_3}} \right)\). 
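This decomposition is easy to perform numerically. The short Python sketch below (the stress values are illustrative, taken from the Mohr's circle example later on this page) splits a symmetric stress tensor into its hydrostatic and deviatoric parts and confirms that the deviatoric part has zero trace.

```python
import numpy as np

def decompose(stress):
    """Split a symmetric stress tensor into hydrostatic and deviatoric parts."""
    sigma_h = np.trace(stress) / 3.0   # hydrostatic stress
    hydro = sigma_h * np.eye(3)
    dev = stress - hydro               # deviatoric (shape-changing) part
    return hydro, dev

stress = np.array([[80.0,  50.0,   0.0],
                   [50.0, -60.0,   0.0],
                   [ 0.0,   0.0, 100.0]])   # MPa

hydro, dev = decompose(stress)
print(np.trace(stress) / 3.0)   # hydrostatic stress: 40.0 MPa
print(np.trace(dev))            # ~0: the deviatoric part is traceless
```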
In crystalline metals plastic deformation occurs by slip, a volume-conserving process that changes the shape of a material through the action of shear stresses. On this basis, it might therefore be expected that the yield stress of a crystalline metal does not depend on the magnitude of the hydrostatic stress; this is in fact exactly what is observed experimentally. In amorphous metals, a very slight dependence of the yield stress on the hydrostatic stress is found experimentally.   Finding the principal stress tensor = ### Rotating the axes: The principal stresses are the eigenvalues of the stress tensor. These can be found from the determinant equation: $$\left| {\begin{array}{\*{20}{c}} {{\sigma \_{11}} - \xi }&{{\sigma \_{12}}}&{{\sigma \_{13}}}\\ {{\sigma \_{21}}}&{{\sigma \_{22}} - \xi }&{{\sigma \_{23}}}\\ {{\sigma \_{31}}}&{{\sigma \_{32}}}&{{\sigma \_{33}} - \xi } \end{array}} \right| = 0$$ This determinant is expanded out to produce a cubic equation from which the three possible values of \(\xi \) can be found; these values are the principal stresses. This is discussed in the book by J.F. Nye. If the stress tensor already has a principal stress along one axis, such as σ33, diagonalising is much simpler: $$\left| {\begin{array}{\*{20}{c}} {{\sigma \_{11}} - \xi }&{{\sigma \_{12}}}&0\\ {{\sigma \_{21}}}&{{\sigma \_{22}} - \xi }&0\\ 0&0&{{\sigma \_{33}} - \xi } \end{array}} \right| = 0$$ When we expand this out, we find that: \[({\sigma \_{33}} - \xi )\left[ {({\sigma \_{11}} - \xi )({\sigma \_{22}} - \xi ) - \sigma \_{12}^2} \right] = 0\] One of the principal stresses must be σ33, and the other two are easy to find by solving the quadratic equation inside the square brackets for \(\xi \). Alternatively, when there are only two principal stresses to find, such as in this example, we can use Mohr's circle. ![](../images/divider400.jpg) ### Mohr's circle method: Mohr's circle in this situation represents a stress state, on two axes – normal (σ) and shear (τ). A Mohr's circle drawn according to the convention in Gere and Timoshenko is shown below.   ![](figures/mohr_circle_sml.png) The normal stresses σ*x* and σy are first plotted on the horizontal σ axis at A and B. Positions C and D are then generated using the magnitude of the shear stress τxy, with the convention for the choice of these positions shown. Different orientations of the axes, and the different stress tensors produced by them, are represented by the different diameters that can be taken across the circle. The principal stress state is the state which has no shear components. This corresponds to the diameter of the Mohr's circle that has no component along the shear axis – it is the diameter that runs along the normal stress axis. The principal stresses are thus the two points where the circle crosses the normal stress axis, E and F: $$\left( {\begin{array}{\*{20}{c}} E&0&0\\ 0&F&0\\ 0&0&{{\sigma \_3}} \end{array}} \right)$$ The angle 2θ shown on the Mohr's circle in an anti-clockwise sense is twice the angle θ required to rotate the set of axes in an anti-clockwise sense from the old set of axes to the principal axes with respect to which the principal stresses are defined. The Mohr's circle below is for an element under a stress state of σ11 = 80 MPa, σ22 = – 60 MPa, σ12 = 50 MPa and σ3 = 100 MPa. Using the slider, change its inclination angle and compare it to the tensor representing the stress state. 
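The same example can be checked numerically: the principal stresses are the eigenvalues of the stress tensor, and for the plane-stress block they must agree with the Mohr's circle construction. A short Python sketch (illustrative, using the stress state quoted above):

```python
import numpy as np

# Stress state from the example above (MPa): s11 = 80, s22 = -60,
# s12 = 50, with s33 = 100 already a principal stress
stress = np.array([[80.0,  50.0,   0.0],
                   [50.0, -60.0,   0.0],
                   [ 0.0,   0.0, 100.0]])

# The principal stresses are the eigenvalues of the (symmetric) stress tensor
principal = np.sort(np.linalg.eigvalsh(stress))[::-1]
print(principal)   # approximately [100.0, 96.0, -76.0]

# Cross-check via the Mohr's circle relations for the 1-2 block:
# centre = (s11 + s22)/2, radius = sqrt(((s11 - s22)/2)^2 + s12^2)
s11, s22, s12 = 80.0, -60.0, 50.0
centre = (s11 + s22) / 2.0
radius = np.hypot((s11 - s22) / 2.0, s12)
print(centre + radius, centre - radius)   # ~96.0 and ~-76.0, as above
```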
Given below is an interactive tool to plot a Mohr's circle according to a user-specified stress state. The strain tensor = When stress is applied to the material, strain is produced. Strain is also a symmetric second-rank tensor. Stress and strain are related by: σ*ij* = C*ijkl*ε*kl* The strain tensor, ε*kl*, is second-rank just like the stress tensor. The tensor that relates them, C*ijkl*, is called the **stiffness tensor** and is fourth-rank. Alternatively: ε*ij* = S*ijkl*σ*kl* S*ijkl* is called the **compliance tensor** and is also fourth-rank. The strain tensor is a field tensor – it depends on external factors. The compliance tensor is a matter tensor – it is a property of the material and does not change with external factors.   ![](../images/divider400.jpg) ### Expressing the strain in a slip process in terms of displacement ![](figures/strain_angle.png) This diagram shows a plane on which slip occurs. A general point P is moved to position P' by the slip. The vectors from the origin to P and P' are r and r' respectively. Also shown are the unit vector n normal to the plane, the unit vector β in the direction of slip, and the perpendicular to the plane from O. The length of the perpendicular from O to the plane is simply r·n. Unit vector n has components of n1, n2, n3, and r has components of x1, x2, x3. Suppose the distance moved in the direction of slip is \(\gamma \left( {\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{r} \cdot \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{n} } \right)\). The displacement vector that represents slip is then given by: $$\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{r} ' - \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{r} = \gamma \left( {\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{r} \cdot \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{n} } \right)\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{\beta } $$ where β is a unit vector in the direction of slip and has components of b1, b2 and b3. 
If the strain angle \( \gamma \) is small, the components of the deformation tensor *e**ij* can be obtained by differentiating the displacements, so that $${e\_{ij}} = {{\partial {u\_i}} \over {\partial {x\_j}}} = {\partial \over {\partial {x\_j}}}\gamma \left( {{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{r} \cdot \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{n} } } \right){\beta \_i}$$ Hence, for example, $${e\_{11}} = {{\partial {u\_1}} \over {\partial {x\_1}}} = {\partial \over {\partial {x\_1}}}\gamma \left( {{\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{r} \cdot \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{n} } } \right){\beta \_1} = \gamma {\beta \_1}{\partial \over {\partial {x\_1}}}\left( {{x\_1}{n\_1} + {x\_2}{n\_2} + {x\_3}{n\_3}} \right) = \gamma {n\_1}{\beta \_1}$$ More generally, $${e\_{ij}} = {{\partial {u\_i}} \over {\partial {x\_j}}} = \gamma {n\_j}{\beta \_i}$$ We can then write the tensor like this: $${e\_{ij}} = \gamma \left( {\matrix{ {{n\_1}{\beta \_1}} & {{n\_2}{\beta \_1}} & {{n\_3}{\beta \_1}} \cr {{n\_1}{\beta \_2}} & {{n\_2}{\beta \_2}} & {{n\_3}{\beta \_2}} \cr {{n\_1}{\beta \_3}} & {{n\_2}{\beta \_3}} & {{n\_3}{\beta \_3}} \cr } } \right)$$ The corresponding symmetric strain tensor, obtained by symmetrising \(e\_{ij}\) (see the separation below), is $${\varepsilon \_{ij}} = \gamma \left( {\matrix{ {{n\_1}{\beta \_1}} & {{1 \over 2}\left( {{n\_2}{\beta \_1} + {n\_1}{\beta \_2}} \right)} & {{1 \over 2}\left( {{n\_3}{\beta \_1} + {n\_1}{\beta \_3}} \right)} \cr {{1 \over 2}\left( {{n\_1}{\beta \_2} + {n\_2}{\beta \_1}} \right)} & {{n\_2}{\beta \_2}} & {{1 \over 2}\left( {{n\_3}{\beta \_2} + {n\_2}{\beta \_3}} \right)} \cr {{1 \over 2}\left( {{n\_1}{\beta \_3} + {n\_3}{\beta \_1}} \right)} & {{1 \over 2}\left( {{n\_2}{\beta \_3} + {n\_3}{\beta \_2}} \right)} & {{n\_3}{\beta \_3}} \cr } } \right)$$   ![](../images/divider400.jpg) ### Separation of the strain tensor Notice that the tensor derived from the diagram is *e**ij* while the strain tensor related to the stress tensor by the stiffness and compliance tensors is ε*ij*. This is not a mistake! The tensor *e**ij* derived from the diagram describes the specimen moving relative to the origin. This includes a change in dimension of the specimen, the strain. It also may include a rotation of the specimen. In terms of the properties of the material, the rotation is not of interest, so we must separate it out to be left with the strain alone. *e**ij* = ε*ij* + ω*ij* where ε*ij* is the strain tensor and ω*ij* is the rotation tensor.   $${\omega \_{ij}} = \gamma \left( {\matrix{ 0 & {{1 \over 2}\left( {{n\_2}{\beta \_1} - {n\_1}{\beta \_2}} \right)} & {{1 \over 2}\left( {{n\_3}{\beta \_1} - {n\_1}{\beta \_3}} \right)} \cr {{1 \over 2}\left( {{n\_1}{\beta \_2} - {n\_2}{\beta \_1}} \right)} & 0 & {{1 \over 2}\left( {{n\_3}{\beta \_2} - {n\_2}{\beta \_3}} \right)} \cr {{1 \over 2}\left( {{n\_1}{\beta \_3} - {n\_3}{\beta \_1}} \right)} & {{1 \over 2}\left( {{n\_2}{\beta \_3} - {n\_3}{\beta \_2}} \right)} & 0 \cr } } \right)$$ A strain tensor must be symmetrical. A rotation tensor must be antisymmetric. The rotation tensor must also have no normal components. The strain tensor ε*ij* is the one used in calculations.   ![](../images/divider400.jpg) ### Volumetric strain The sum of the diagonal elements of the strain tensor is the **volumetric strain** or **dilatation:** $${{\Delta V} \over V} = {\varepsilon \_{11}} + {\varepsilon \_{22}} + {\varepsilon \_{33}} = \Delta $$ The volumetric strain for metals during plastic deformation is zero. 
Hence, during plastic deformation there are five independent components, rather than six, of the general strain tensor, describing an incremental change of shape.   Yield criteria for metals = A **yield criterion** is a hypothesis defining the limit of elasticity in a material and the onset of plastic deformation under any possible combination of stresses. There are several possible yield criteria. We will introduce two types here relevant to the description of yield in metals. To help understanding of combinations of stresses, it is useful to introduce the idea of principal stress space. The orthogonal principal stress axes are not necessarily related to orthogonal crystal axes. ![](figures/ortho_axes.jpg) Using this construction, *any* stress can be plotted as a point in 3D stress space. For example, the uniaxial stress \(\left( {\begin{array}{\*{20}{c}} \sigma &0&0\\ 0&0&0\\ 0&0&0 \end{array}} \right)\) where σ1 = σ; σ2 = σ3 = 0, plots as a point on the σ1 axis. A purely hydrostatic stress σ1 = σ2 = σ3 = σ*H* will lie along the vector [111] in principal stress space. For any point on this line, there can be no yielding, since in metals, it is found experimentally that hydrostatic stress does not induce plastic deformation (see above). ![](figures/hydrostatic.jpg)*The 'hydrostatic line'* We know from uniaxial testing that if σ1 = Y, σ2 = σ3 = 0, where Y is the uniaxial yield stress, then yielding will occur. Therefore, there must be a surface, which surrounds the hydrostatic line and passes through (Y, 0, 0), that defines the boundary between elastic and plastic behaviour. This surface will define a yield criterion. Such a surface has also to pass through the points (0, Y, 0), (0, 0, Y), (–Y, 0, 0), (0, –Y, 0) and (0, 0, –Y). The plane defined by the three points (Y, 0, 0), (0, Y, 0) and (0, 0, Y) is parallel to the plane defined by the three points (–Y, 0, 0), (0, –Y, 0) and (0, 0, –Y). The simplest shape for a yield criterion satisfying these requirements is a cylinder of appropriate radius with an axis along the hydrostatic line. This can be described by an equation of the form: \[{\left( {{\sigma \_1} - {\sigma \_2}} \right)^2} + {\left( {{\sigma \_2} - {\sigma \_3}} \right)^2} + {\left( {{\sigma \_3} - {\sigma \_1}} \right)^2} = {\rm{constant}}\] From above, if σ1 = Y, σ2 = σ3 = 0, then the constant is given by 2Y2. This is the **von Mises Yield Criterion**. We can also define a yield stress in terms of a pure shear, k. A pure shear stress can be represented in a Mohr's circle, as follows: ![](figures/pure_shear_sml.png) Referred to principal stress space, we have σ1 = k, σ2 = –k, σ3 = 0. The von Mises criterion can therefore be expressed as: \[2{Y^2} = 6{k^2}{\rm{ }} \Rightarrow {\rm{ }}Y = k\sqrt 3 \]   A mathematically simpler criterion which satisfies the requirements for the yield surface having to pass through (Y, 0, 0), (0, Y, 0) and (0, 0, Y) is the **Tresca Criterion**. If we suppose σ1 > σ2 > σ3, then the largest difference between principal stresses is given by (σ1 – σ3). If yielding occurs when σ1 = Y, σ2 = σ3 = 0, then (σ1 – σ3) = Y. For yield in pure shear at some shear stress k, when referred to the principal stress state we could have \[{\sigma \_1} = k,{\rm{ }}{\sigma \_2} = 0,{\rm{ }}{\sigma \_3} = - k{\rm{ }} \Rightarrow {\rm{ }}Y = 2k\] The Tresca criterion is (σ1 – σ3) = Y = 2k. 
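The two criteria can be compared directly in code. The Python sketch below (illustrative values) tests a pure shear stress state against both criteria, showing that Tresca predicts yield at k = Y/2 while von Mises requires the slightly larger k = Y/√3.

```python
def von_mises_yields(s1, s2, s3, Y):
    """Von Mises criterion: (s1-s2)^2 + (s2-s3)^2 + (s3-s1)^2 >= 2*Y^2."""
    return (s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2 >= 2.0 * Y**2

def tresca_yields(s1, s2, s3, Y):
    """Tresca criterion: largest principal stress difference >= Y (= 2k)."""
    return max(s1, s2, s3) - min(s1, s2, s3) >= Y

# Pure shear referred to principal axes: s1 = k, s2 = -k, s3 = 0.
# Von Mises predicts yield at k = Y/sqrt(3) (~173 MPa here);
# Tresca predicts yield at k = Y/2 (150 MPa), so it is more conservative.
Y = 300.0
for k in (150.0, 160.0, 175.0):
    print(k, von_mises_yields(k, -k, 0.0, Y), tresca_yields(k, -k, 0.0, Y))
```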
Viewed down the hydrostatic line, the two criteria appear as:

![](figures/tresca.jpg)

Yield criteria for non-metals =

When ceramics deform plastically (usually only at temperatures very close to their melting point, if at all), they often obey the von Mises or Tresca criterion. However, other materials such as polymers and geological materials (rocks and soils) display yield criteria that are *not* independent of hydrostatic pressure. Empirically, it is found that as the hydrostatic pressure is increased, the yield stress increases, and so we do not expect a yield criterion based solely on the deviatoric component of stress to be valid. The first attempt to produce a yield criterion incorporating the effect of pressure was derived by Coulomb.

![](../images/divider400.jpg)

### The Coulomb criterion

Failure occurs when the shear stress, τ, on any plane reaches a critical value, τc, which varies linearly with the stress normal to that plane: \[\tau_c = \tau^* - \sigma_n\tan\phi\] where σn is the normal stress on the plane of failure, taken positive in tension (so that on this convention a compressive stress or pressure is a negative quantity), τ* is a material parameter and φ is the angle of shearing resistance. (Note: tan φ is not a 'coefficient of friction', although it is often referred to as such.)

![](figures/coulomb.jpg)
Failure locus for soil

For soils, tan φ ≈ 0.5 – 0.6 typically. So as a soil is compressed, τc increases. If the **principal components** of stress are σ1, σ2, σ3 for a particular stress state at some point within a soil mass, we can draw three Mohr's circles, with diameters defined by the pairs (σ1, σ3), (σ2, σ3) and (σ1, σ2). For failure, we require only one of these to touch the failure locus, e.g.

![](figures/coulomb_3D.jpg)

Here, failure is determined by \(\left|\sigma_1 - \sigma_3\right|\), not by σ2. This is therefore a variant or modification of the Tresca criterion.

![](../images/divider400.jpg)

A better model for polymers is to assume that the shear stress k at which failure occurs is a function of the hydrostatic stress or pressure, e.g. $$k = k_0 + \mu P$$ where \(P = -\frac{1}{3}\left(\sigma_1 + \sigma_2 + \sigma_3\right) = -\sigma_H\) is the hydrostatic pressure and k0 is the value of the shear yield stress at zero hydrostatic pressure. If we do this, we obtain pressure-modified criteria. For example, the yield surface for the pressure-modified von Mises criterion is a cone of circular cross-section with its axis along σ1 = σ2 = σ3: $$\left(\sigma_1 - \sigma_2\right)^2 + \left(\sigma_2 - \sigma_3\right)^2 + \left(\sigma_3 - \sigma_1\right)^2 = 2Y^2 = 6k^2 = 6\left(k_0 + \mu P\right)^2$$ The yield surface for the pressure-modified Tresca criterion is a hexagonal pyramid with its axis along σ1 = σ2 = σ3: $$\left(\sigma_1 - \sigma_3\right) = Y = 2k = 2\left(k_0 + \mu P\right)$$ These modified criteria work well for polymers.

Summary =

* Stress and strain, and the relationship between them, can be expressed in tensor formalism.
* The stress tensor is symmetric and can be separated into hydrostatic and deviatoric components.
* The stress state can be expressed by a tensor that has only diagonal components – the principal stress tensor. This is achieved by rotating the axes of the stress tensor so that the axes are parallel to the forces on the body.
* The measured strain tensor can be separated into a symmetric real strain tensor and an antisymmetric rotation tensor. The real strain tensor can then be separated into dilatational (volume expansion) and deviatoric (shape change) components.
* We can define combinations of the three principal stress components that will cause yield – **yield criteria**. Different criteria are best used for different materials. The best one for metals is the von Mises yield criterion: $$(\sigma_1 - \sigma_2)^2 + (\sigma_2 - \sigma_3)^2 + (\sigma_3 - \sigma_1)^2 = 6k^2 = 2Y^2$$ A mathematically simpler approximation to the von Mises yield criterion is the Tresca yield criterion: $$\frac{\left(\sigma_1 - \sigma_3\right)}{2} = k = \frac{Y}{2}$$
* If a yield criterion is plotted in 3D stress space, we have a **yield surface**.

Questions =

### Quick questions
*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. Use the interactive Mohr's circle below to help you with this question. Which of these stress states (a, b, c or d) is not the same as the others?
2. What kind of movement do these tensors (a, b, c and d) describe – rotation, strain or both? Click on each and check the answer.
3. The yield stress of an aluminium alloy in uniaxial tension is 320 MPa. The same alloy also yields under the combined stress state: ![](eqn/eqn_questions/eq0011M.gif) Is the behaviour of this alloy better described by the von Mises or the Tresca yield criterion?

Going further =

### Books
[1] J.F. Nye, *Physical Properties of Crystals*, Oxford, 1985.
[2] G.E. Dieter, *Mechanical Metallurgy*, 3rd Edition, McGraw-Hill, 1990.
[3] D.R. Lovett, *Tensor Properties of Crystals*, 2nd Edition, Adam Hilger, 1999.
[4] B.J. Goodno and J.M. Gere, *Mechanics of Materials*, 9th Edition, Cengage, 2018.
[5] A. Kelly and K.M. Knowles, *Crystallography and Crystal Defects*, 3rd Edition, Wiley, 2020.
Aims

On completion of this TLP you should:
* Be aware of the major deformation processes and the ways in which they are used to form metallic objects:
  + Rolling
  + Extrusion
  + Forging
  + Drawing
* Appreciate the advantages and disadvantages of each.
* Understand the reasons for selecting a process for a particular shape of end product.

Before you start

A basic understanding of plastic deformation (including work hardening) is assumed.

Introduction

A forming operation is one in which the shape of a metal sample is altered by plastic deformation. Forming processes include stamping, rolling, extrusion and forging, where deformation is induced by external compressive forces or stresses exceeding the yield stress of the material. Drawing is a fundamentally different process in that the external forces are tensile in nature, and hence the ultimate tensile strength of the material cannot be exceeded. Metals or alloys used in forming processes require a moderate level of ductility to enable plastic deformation without fracture.

![some formed articles](figures/formed_articles_sml.jpg)
Some formed articles

Forming can be divided into two categories:

![](../images/divider400.jpg)

### Hot Working

![Large hot ingot, image provided by Harry Bhadeshia](figures/hot_ingot_sml.jpg)
Hot ingots

Deformation is carried out at a temperature high enough for fast recrystallisation to occur. A crude estimate of the hot working temperature T for a particular metal or alloy is that it must be greater than 0.6Tm, where Tm is the melting point in kelvin. This lower bound on the hot working temperature varies for different metals, depending on factors such as purity and solute content. Thus, a highly pure metal will undergo recovery and recrystallisation at a particular hot working temperature more readily than an alloyed metal. Deformation energy requirements for hot working are less than those for cold working. At hot working temperatures, a metal remains ductile through dynamic reforming of its grain structure, so repeated, large deformations are possible. The strain rates of many metal-working processes are so high that there is insufficient time for the metal to recrystallise as it deforms. However, recovery and recrystallisation do occur in the time period between repeated hot working operations.

Hot working achieves both the mechanical purpose of obtaining the desired shape and also the purpose of improving the physical properties of the material by destroying its original cast structure. The porous cast structure, often with a low mechanical strength, is converted to a wrought structure with finer grains, enhanced ductility and reduced porosity. Depending on the final hot working temperature, an annealed microstructure can be obtained.

At elevated temperatures, most metals experience some surface oxidation, which results in a poor surface finish as well as a loss of material. Processing in an inert atmosphere is possible, but it is very expensive and is usually avoided unless the metal is very reactive.

![](../images/divider400.jpg)

### Cold Working

This is the term for processes that are performed at room temperature (or up to about 200°C for some metals). Cold working leads to anisotropy and increased stiffness and strength in a metal.
There is a corresponding decrease in ductility and malleability as the metal strain hardens. Advantages over hot working include a better quality surface finish, closer dimensional control of the final article and improved mechanical properties. Cold working processes can be divided into two broad classes:

1. Those in which cold working is carried out for the purpose of shaping the article only.
   * Here, any strain hardening effects are not desired and may have to be removed by annealing, both between the various stages of plastic shaping and after the final cold working shaping operation.
2. Those in which the object of cold working is not only to obtain the required shape but also to strain harden and strengthen the metal.
   * Plastic deformation must not be carried beyond a certain point or brittle fracture is likely to result. To avoid this, the total deformation can be accomplished in a series of steps in which the article is successively cold worked by a small amount and then annealed to reduce hardness and increase ductility, thereby permitting further cold working as required.

Rolling =

Rolling is the most widely used deformation process. It consists of passing metal between two rollers, which exert compressive stresses, reducing the metal thickness. Where simple shapes are to be made in large quantity, rolling is the most economical process. Rolled products include sheets, structural shapes and rails, as well as intermediate shapes for wire drawing or forging. Circular shapes, 'I' beams and railway tracks are manufactured using grooved rolls.

![rolling](figures/rolling_sml.jpg)
Rolling

### Hot Rolling

Initial breakdown of an ingot or a continuously cast slab is achieved by hot rolling. Mechanical strength is improved and porosity is reduced. The worked metal tends to oxidise, leading to scale, which results in a poor surface finish and loss of precise dimensions. A hot rolled product is therefore often treated to remove scale, and further rolled cold to ensure a good surface finish and to optimise the mechanical properties for a given application.

**HOT ROLLING** (video)
Reproduced from Materials Selection and Processing CD, by A.M. Lovatt, H.R. Shercliff and P.J. Withers.

![](../images/divider400.jpg)

### Cold Rolling

Cold rolling is often used in the final stages of production. Sheets, strips and foils are cold rolled to attain dimensional accuracy and high quality surface finishes. With softer metals such as lead and copper, a succession of cold-rolling passes can impose very large deformations. For many materials, however, the rolling sequence has to be interrupted for intermediate annealing in order to prevent fracture.

Forging =

In this operation, a single piece of metal, normally hot, is deformed mechanically by the application of successive blows or by continuous squeezing. Forged articles range in size from nuts and bolts, hip replacement prostheses and crankshafts to (traditionally) gun barrels. Most engineering metals and alloys can be forged readily; they include most steels, aluminium and copper alloys, and certain titanium alloys such as 6-4 (Ti-6 wt.% Al-4 wt.% V) and 6-2-4-2 (Ti-6 wt.% Al-2 wt.% Sn-4 wt.% Zr-2 wt.% Mo).
Strain-rate and temperature-sensitive materials, such as magnesium and nickel-based superalloys, may require more sophisticated forging processes. Forged articles have excellent mechanical properties, combining a fine grain structure with strengthening through strain hardening.

![](../images/divider400.jpg)

### Closed Die

![closed die forging](figures/closeddieforge_sml.jpg)
Closed-die forging

A force is brought to bear on a metal slug or preform placed between two (or more) die halves. The metal flows plastically into the cavity formed by the die and hence takes up its finished shape. Examples of the machinery used include presses and hammers.

**CLOSED DIE FORGING** (video)
Reproduced from Materials Selection and Processing CD, by A.M. Lovatt, H.R. Shercliff and P.J. Withers.

Possible geometries range from simple spherical blocks and discs to intricate components incorporating thin webs, holes, cavities, pockets and ribs. As metal flow is restricted by the die contours, closed-die forging can produce more complex shapes and higher tolerances than open-die forging processes.

![](../images/divider400.jpg)

### Open Die

![open die forging](figures/opendieforge_sml.jpg)

Extrusion =

In extrusion, a bar of metal is forced from an enclosed cavity through a die orifice by a compressive force applied by a ram. Since there are no tensile forces, high deformations are possible without the risk of fracture of the extruded material. The extruded article has the desired, reduced cross-sectional area and a good surface finish, so that further machining is not needed. Extrusion products include rods and tubes with varying degrees of complexity in cross-section.

![extrusion](figures/extrusion_sml.jpg)
Extrusion

Examples of metals that can be extruded include lead, tin, aluminium alloys, copper, brass and steel. The minimum cross-sectional dimensions for extruded articles are approximately 3 mm in diameter for steel and 1 mm in diameter for aluminium. Some metals, such as lead alloys and brass, lend themselves to extrusion rather than drawing or rolling.

**EXTRUSION** (video)
Reproduced from Materials Selection and Processing CD, by A.M. Lovatt, H.R. Shercliff and P.J. Withers.

Hot extrusion is carried out at a temperature T of approximately 0.6Tm, and the pressures required range from 35 to 700 MPa. Under these demanding conditions, a lubricant is required to protect the die. Oil and graphite lubricants function well at temperatures up to 150°C, but borate glass or hexagonal boron nitride powders are favoured at higher temperatures, where carbon-based lubricants oxidise.

![Extruded products](figures/extruded_products_sml.jpg)
Extruded products

Cold extrusion is performed at temperatures significantly below the melting temperature of the alloy being deformed, and generally at room temperature. The process can be used for most materials, provided that sufficiently robust machinery can be designed. Products of cold extrusion include aluminium cans, collapsible tubes and gear blanks.

Drawing =

Drawing is the pulling of a metal piece through a die by means of a tensile force applied on the exit side. A reduction in cross-sectional area results, with a corresponding increase in length.
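Because plastic deformation conserves volume, the wire must speed up as its cross-section shrinks, a point picked up in the next paragraph. A minimal Python sketch (the diameters and entry speed are illustrative values, not data from this TLP):

```python
import math

d0, d1 = 2.0e-3, 1.5e-3      # entry and exit wire diameters, m (illustrative)
A0 = math.pi * d0**2 / 4.0   # entry cross-sectional area
A1 = math.pi * d1**2 / 4.0   # exit cross-sectional area
v0 = 10.0                    # entry speed, m/s (illustrative)

v1 = v0 * A0 / A1            # conservation of volume: A0*v0 = A1*v1
print(f"exit speed {v1:.1f} m/s after a {1 - A1/A0:.0%} reduction in area")
```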
A complete drawing apparatus may include up to twelve dies in a series sequence, each with a hole a little smaller than the preceding one. In multiple-die machines, each stage results in an increase in length, and therefore a corresponding increase in speed is required between each stage. This is achieved using "capstans", which both apply the tensile force and accommodate the increase in the speed of the drawn wire. These speeds may reach 60 m s–1. Dies must be very hard, so they tend to be made from steel or chilled cast iron; however, tungsten carbide and even diamond are increasingly used because of their greater ability to retain shape. A typical lubricant used for drawing is tallow, a soap/fat paste-type material with a formulation of 5 wt% soap, 25 wt% oil, 25 wt% water and 45 wt% solids.

![drawing](figures/drawing_sml.jpg)
Drawing

Metals can be formed to much closer dimensions by drawing than by rolling. Shapes ranging in size from the finest wire to those with cross-sectional areas of many square centimetres are commonly drawn. Larger artefacts may be drawn to square, round and even irregular cross-sections. Drawn products include wires, rods and tubing. Large quantities of steel and brass are cold drawn. Seamless tubing can be produced by cold drawing when thin walls and very accurate finishes are required.

Other processes =

### Stamping

Stamping is used to make high-volume parts such as aviation or car panels and electronic components. Mechanical or hydraulically powered presses stamp out parts from continuous sheets of metal or individual blanks. The upper die is attached to the ram and the lower die is fixed. Whereas mechanical machinery transfers all the energy as a rapid punch, hydraulic machinery delivers a constant, controlled force.

![stamping](figures/stamping_sml.jpg)
Stamping

![](../images/divider400.jpg)

### Deep Drawing

For deep drawing, the starting sheet of metal is larger than the area of the punch. A pressure plate, fixed to the machine, prevents wrinkling of the edges as the plug is drawn into a top die cavity. The outer parts of the sheet are drawn in towards the die as the operation proceeds. The process is limited by the possibility of fracture occurring during drawing; the maximum sheet width is rarely more than twice the die diameter. Many shapes are possible, including cups, pans, cylinders and irregular-shaped products.

![](figures/deepdrawing_sml.jpg)
Deep drawing

![](../images/divider400.jpg)

### Pressing

A sheet of metal is deformed between two suitably shaped dies, usually to produce a cup- or dish-shaped component. A thick pad of rubber may replace one of the dies, giving reduced tooling costs and allowing larger deformations to be imposed.

![pressing](figures/pressing_sml.jpg)
Pressing

Summary =

The production of the vast majority of metallic objects at some stage involves one of these four deformation processes:
* Rolling
* Extrusion
* Forging
* Drawing

The intention of this TLP is to introduce these four processes and to provide a brief description of their strengths, limitations and suitability for various applications.

Questions =

### Quick questions
*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. What would happen to a brittle metal such as white cast iron if it were formed by closed die forging?
2. Discuss how the metal pieces of this article were made.
![](figures/filing_cabinet_sml.jpg)

### Deeper questions
*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

3. The melting temperature of a low-carbon steel is 1534°C. Above what temperature can we use hot working to form it, and why?
4. What processes could be used to make this shape? What else could affect the choice of process?
![](figures/drawn_cup_sml.jpg)
5. At 30 m s–1, what weight of 1 mm diameter copper wire could be drawn in an hour? The density of copper is 8920 kg m–3.
6. Do you think steel reinforcing bars for concrete are drawn or extruded?

Going further =

### CD-ROM
* Lovatt A.M., Shercliff H.R. and Withers P.J. (2000), "Material Selection and Processing", CD-ROM and supporting booklets, Technology Enhancement Programme (part of Gatsby Technical Education Project), London.
Aims

* Appreciate different approaches, including their inherent assumptions, that can be used to model plastic flow in metal forming operations.
* Gain an insight into these approaches and how they can provide estimates of the necessary deformation loads for different metal forming operations.
* Recognise the relative importance and limitations of the different approaches, ranging from simple work analyses to hodographs, and become familiar with their use.

Before you start

You should be familiar with the concepts of stress and strain, including their tensor descriptions.

Introduction

After extracting metals from their ores and adding different elements to obtain a precise alloy optimised for a given end usage, the alloy often starts as a relatively large billet from which objects have to be fabricated. The large billets therefore have to be reduced and reshaped by mechanical deformation processes such as forging, rolling and extrusion. These processes are energy intensive and require expensive machinery. It would be inappropriate to over-design such machinery, since that would be unnecessarily expensive, while under-designing would prevent the alloy from being deformed. Therefore, it is important to know the loads required to achieve the necessary deformations for different alloys. This TLP examines various approaches that can be used to provide estimates of the loads (forces or stresses) required when deforming metallic objects. Some of the approaches are two-dimensional, and this introduces the concepts of plane strain and plane stress. In addition, the precise nature of the alloy will affect its mechanical behaviour, and so the idea of homogeneous deformation is also included.

The approaches included in this TLP to estimate deformation loads are:
* The Lévy-Mises equations
* Slip line field theory
* The work formula method
* Limit analysis and hodographs
* Finite element analysis

This module addresses how materials deform (change shape) when subjected to applied forces (stresses). Clearly, the nature of such deformations will depend on the class of material (metal, polymer, glass, etc.) as well as the precise microstructure of the specific material. In the following, the deformation is assumed to be homogeneous, i.e. it is independent of microstructural features such as grain size, dislocation density and defects. Thus the properties of the material are assumed to be isotropic.

Lévy-Mises Equations

Once the yield criterion is satisfied, we can no longer expect to use the equations of elasticity. We must develop a theory to predict plastic strains from the imposed stresses. When a body is subjected to stresses of sufficient magnitude, it will plastically deform (or fracture). The nature of the stresses depends on the particular forces applied to the body and, often, the same resulting deformation may be achieved by applying forces in different ways. For instance, a ductile metallic rod may be extended (elongated) a given amount either by a single force along its axis (i.e. a tensile stress) or by the combined action of several forces acting in different directions (i.e. multi-axial loading). A simple example of the latter multi-axial loading situation, giving the same extension of the metallic rod as that obtained in pure tension, is to apply a reduced tensile stress while simultaneously compressing the rod along its length. Under such multi-axial loading, the behaviour of ductile metallic materials can be described by the Lévy-Mises equations, which relate the principal components of the strain increments during plastic deformation to the principal applied stresses. In general, there will be both elastic and plastic strains.
However, to a first approximation, we can ignore the elastic strains by assuming that the plastic strains will dominate in a deformation processing situation. We can therefore treat the material as a rigid-plastic solid, i.e. a material which is perfectly rigid prior to yielding and perfectly plastic afterwards. Since plasticity is a form of flow, we can relate the strain rate, \(\frac{{\rm d}\varepsilon}{{\rm d}t}\), to the stress σ. Plastic flow is similar to fluid flow, except that any rate of flow (strain rate) can occur for the same yield stress. From symmetry we can show that in an isotropic body the principal axes of stress and strain rate coincide, i.e. it goes the way you push it. With respect to principal axes, \[\frac{\dot\varepsilon_1}{\sigma'_1} = \frac{\dot\varepsilon_2}{\sigma'_2} = \frac{\dot\varepsilon_3}{\sigma'_3}\] where \(\dot\varepsilon_i = \frac{{\rm d}\varepsilon_i}{{\rm d}t}\) (i = 1, 2, 3) is the normal strain rate parallel to the ith axis, \(\sigma'_i\) is the deviatoric component of normal stress parallel to the ith axis, and \[\sigma'_1 = \sigma_1 - \frac{1}{3}\left(\sigma_1 + \sigma_2 + \sigma_3\right)\] Note that this can be rewritten as \(\sigma'_1 = \frac{2}{3}\left[\sigma_1 - \frac{1}{2}\left(\sigma_2 + \sigma_3\right)\right]\), and similarly for the other two axes, so the common factor of 2/3 cancels from the ratios. If we consider small intervals of time δt, and call the resultant changes in strain δε1, δε2, δε3, it follows that \[\frac{\delta\varepsilon_1}{\sigma_1 - \frac{1}{2}\left(\sigma_2 + \sigma_3\right)} = \frac{\delta\varepsilon_2}{\sigma_2 - \frac{1}{2}\left(\sigma_3 + \sigma_1\right)} = \frac{\delta\varepsilon_3}{\sigma_3 - \frac{1}{2}\left(\sigma_1 + \sigma_2\right)}\] These are the **Lévy-Mises equations**. As \(\frac{1}{3}\left(\sigma_1 + \sigma_2 + \sigma_3\right)\) is an invariant of the stress tensor, these equations apply even if stresses and strains are not referred to principal axes, so \[\frac{\delta\varepsilon_{11}}{\sigma_{11} - \frac{1}{2}\left(\sigma_{22} + \sigma_{33}\right)} = \frac{\delta\varepsilon_{22}}{\sigma_{22} - \frac{1}{2}\left(\sigma_{33} + \sigma_{11}\right)} = \frac{\delta\varepsilon_{33}}{\sigma_{33} - \frac{1}{2}\left(\sigma_{11} + \sigma_{22}\right)}\] for a general stress tensor and plastic strain increments δε11, δε22 and δε33. The above Lévy-Mises equations describe precisely the relationships between the normal stresses (arising from any general applied stress situation with respect to a particular set of orthogonal axes) and the resulting normal plastic strains (deformation) of a body referred to the same set of orthogonal axes. In many situations, the precise stresses are not known accurately, and so more empirical approaches can be very helpful in describing the deformation of a body subjected to applied forces. A number of these approaches are considered in this TLP. However, several require further constraints, in particular the need to work in two dimensions, and this introduces the concepts of plane stress and plane strain.

• Plane stress

In plane stress, one of the principal stresses is zero but there are three finite strains. An example of this is the surface of a thin-walled, pressurised cylinder, where the principal stress normal to the surface has a value of zero. More generally, plane stress conditions occur in sheet metal forming when a thin sheet is subjected to uniaxial or biaxial tension.
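Before specialising to plane stress, it may help to evaluate the Lévy-Mises proportionality numerically. A minimal Python sketch (the unit stress and the proportionality constant dλ are illustrative) for a uniaxial tensile stress state:

```python
import numpy as np

def levy_mises_increments(s1, s2, s3, d_lambda=1.0):
    """Plastic strain increments proportional to the deviatoric stress terms,
    d_eps_i = d_lambda * (s_i - 0.5*(s_j + s_k)), referred to principal axes."""
    s = np.array([s1, s2, s3])
    return d_lambda * (s - 0.5 * (s.sum() - s))

d_eps = levy_mises_increments(1.0, 0.0, 0.0)  # uniaxial tension, unit stress
print(d_eps)        # [ 1.  -0.5 -0.5] : increments in the ratio 2 : -1 : -1
print(d_eps.sum())  # 0.0 : plastic flow conserves volume
```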
### Plastic deformation in plane stress

Consider the uniaxial tensile behaviour of a sheet.

![](figures/plane_stress_sml.png)

Plastic flow will result once a critical stress is reached. Due to the constraint of the neighbouring elastic material, the plastically deforming material forms a band across the sheet at a characteristic angle to the axis of loading. \[\sigma_1 \ne 0,\;\sigma_2 = \sigma_3 = 0\] At the boundary between the elastic and the yielded material, the longitudinal strains must match for continuity. They must therefore be zero, since the strain is effectively zero in the elastic regions. The plastic strain along *v* is zero: δεvv = 0.

From the Lévy-Mises equations, \[\frac{\delta\varepsilon_1}{\sigma_1} = \frac{\delta\varepsilon_2}{-\frac{1}{2}\sigma_1} = \frac{\delta\varepsilon_3}{-\frac{1}{2}\sigma_1}\] and so \(\delta\varepsilon_1 = -2\,\delta\varepsilon_2\) in the plane of the sheet. Hence, if we let δε1 = +2 units of plastic strain, δε2 = –1 unit of plastic strain. Therefore, on a Mohr's circle, we have:

![](figures/mohr_plane_stress_sml.png)

and the longitudinal strain δε is zero, i.e. δεvv = 0, at an angle θ with respect to the direction parallel to σ1. From the diagram, \[\cos\left(180^\circ - 2\theta\right) = \frac{0.5}{1.5} = \frac{1}{3}\] \[\Rightarrow \theta = \frac{1}{2}\cos^{-1}\left(-\frac{1}{3}\right) = 54.74^\circ\] and so the longitudinal strain increment δεvv = 0 at an angle of 54.74° with respect to σ1.

This phenomenon is well known in mild steel. The bands created are known as Lüders bands. These bands require less stress for their propagation than for their formation, because of the freeing of dislocations from their solute atmospheres.

![](figures/Luders_band.jpg)
![](figures/cottrell_sml.png)
(Lüders band formation in steel, image contributed from the University of California, Davis.)

It is worth noting that Lüders bands occur in certain types of steel, such as low carbon steel (mild steel), but not in other metallic alloys, such as aluminium alloys or titanium alloys. This is because plastic strain localisation is normally suppressed by work hardening, which tends to make plastic flow occur rather uniformly in a metal, particularly in the early stages of plastic flow, i.e. just after yield has taken place. However, in certain types of low carbon steel at room temperature, Cottrell atmospheres of carbon atoms, which have been able to segregate preferentially to dislocation cores, pin dislocations until the upper yield point is reached. Once the upper yield point is reached, there is a load drop and then a sudden burst of plastic straining at a constant externally applied load, as cascades of dislocations are able to escape their Cottrell atmospheres. This is clearly rather specialised behaviour, caused by the ability of carbon atoms to diffuse relatively easily interstitially in these steels, but it is necessary behaviour for the formation of Lüders bands. Conventional work hardening in metallic alloys in the early stages of plastic deformation makes any strain localisation (as demonstrated by the formation of Lüders bands) unlikely.
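The band angle derived above follows directly from the Mohr's circle construction, and is easily checked numerically (a minimal Python sketch):

```python
import numpy as np

# Direction of zero longitudinal strain in plane-stress uniaxial tension:
# cos(180 - 2*theta) = 1/3 on the Mohr's circle, i.e. theta = 0.5*arccos(-1/3)
theta = 0.5 * np.degrees(np.arccos(-1.0 / 3.0))
print(round(theta, 2))  # 54.74 degrees to the tensile axis
```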
Strain localisation is similarly unlikely in pure metals, and in metals at high temperature, where large plastic strains can occur without a significant load increase once plastic deformation begins. Therefore, Lüders bands only form if a limited burst of plastic straining is able to take place at constant load. Mild steels heated to sufficiently high temperatures (> 400 °C) and then tensile tested do not exhibit Lüders bands.

• Plane strain

Much deformation of practical interest occurs under a condition that is nearly, if not exactly, one of plane strain, i.e. where one principal strain (say ε3) is zero, so that δε3 = 0. Plane strain is applicable to forming operations such as rolling and extrusion, where flow in a particular direction is constrained by the geometry of the machinery, e.g. a well-lubricated die wall. A specific example of this is rolling, where the major deformation occurs perpendicular to the roll axis. The material becomes thinner and longer, but not wider. Frictional stresses parallel to the rolls (i.e. in the width direction) prevent deformation in this direction, and hence a plane strain condition is produced where δε3 = 0. This can be seen in the animation below.

**HOT ROLLING** (video)
Reproduced from Materials Selection and Processing CD, by A.M. Lovatt, H.R. Shercliff and P.J. Withers.

![](../images/divider400.jpg)

### Plastic deformation in plane strain

Here, one principal strain is zero. Let this be ε3; then δε3 = 0. From the Lévy-Mises equations, \[\frac{\delta\varepsilon_1}{\sigma_1 - \frac{1}{2}\left(\sigma_2 + \sigma_3\right)} = \frac{\delta\varepsilon_2}{\sigma_2 - \frac{1}{2}\left(\sigma_3 + \sigma_1\right)} = \frac{\delta\varepsilon_3}{\sigma_3 - \frac{1}{2}\left(\sigma_1 + \sigma_2\right)} \ne 0\] it follows that \(\sigma_3 = \frac{1}{2}\left(\sigma_1 + \sigma_2\right)\): since the common ratio is non-zero, δε3 can only vanish if its denominator, \(\sigma_3 - \frac{1}{2}\left(\sigma_1 + \sigma_2\right)\), is zero. Hence σ3 is the mean of σ1 and σ2. By convention we take σ1 > σ2, so that σ1 > σ3 > σ2. Therefore the maximum shear stress in the σ1–σ2 plane is at 45° to the axes and has magnitude \(\frac{\sigma_1 - \sigma_2}{2}\).

If we now examine the Tresca and von Mises yield criteria, we find:
* Tresca: \(\frac{\sigma_1 - \sigma_2}{2} = k = \frac{Y}{2}\)  (*k* = shear yield stress and *Y* = uniaxial yield stress)
* von Mises: \(\left(\sigma_1 - \sigma_2\right)^2 + \left(\sigma_2 - \sigma_3\right)^2 + \left(\sigma_3 - \sigma_1\right)^2 = 6k^2 = 2Y^2\)

$${\rm If}\;\;\sigma_3 = \tfrac{1}{2}\left(\sigma_1 + \sigma_2\right),\;\;\tfrac{3}{2}\left(\sigma_1 - \sigma_2\right)^2 = 6k^2 = 2Y^2$$ $$\Rightarrow \left(\sigma_1 - \sigma_2\right) = 2k = \frac{2Y}{\sqrt 3}$$

Therefore, in plane strain the Tresca yield criterion and the von Mises yield criterion give the same result when expressed in terms of *k*. It is unnecessary to specify which criterion we are using, provided we use *k*.

![](../images/divider400.jpg)

Consider a metal in uniaxial compression where plastic strain only takes place in the 1-2 plane. There is no friction between the work piece and the die faces. (To achieve this experimentally, a sample should be wide in the 3 direction.)
![](figures/pk_lines_sml.png)

\[\Rightarrow \sigma_{ij} = \left( {\matrix{ {\sigma_1} & 0 & 0 \cr 0 & {\sigma_2} & 0 \cr 0 & 0 & {\frac{\sigma_1 + \sigma_2}{2}} \cr } } \right)\]

Hydrostatic stress: \[\sigma_H = -p = \frac{\sigma_1 + \sigma_2}{2} = \sigma_3\] where *p* is the hydrostatic pressure. So at yield we have \(\frac{\sigma_1 - \sigma_2}{2} = k\) and, since \(\frac{\sigma_1 + \sigma_2}{2} = -p\), \[\sigma_1 = -p + k = 0\] \[\sigma_2 = -p - k = -2k\] since *p* = *k* at yield, and \[\sigma_3 = -p\]

So for this example, the stress tensor is \[\sigma_{ij} = \left( {\matrix{ {-p} & 0 & 0 \cr 0 & {-p} & 0 \cr 0 & 0 & {-p} \cr } } \right) + \left( {\matrix{ k & 0 & 0 \cr 0 & {-k} & 0 \cr 0 & 0 & 0 \cr } } \right)\] which is the sum of a hydrostatic stress (which can vary in magnitude through the object) and a deviatoric pure shear stress (which has the same value throughout the material). The directions of maximum shear stress therefore lie at 45° to *σ*1 and *σ*2. These are the directions along which plastic flow occurs. We are avoiding additional complexities such as work hardening by assuming the materials are perfectly plastic.

Slip Line Field Theory =

This approach is used to model plastic deformation in plane strain only, for a solid that can be represented as a rigid-plastic body. Elasticity is not included and the loading has to be quasi-static. In terms of applications, the approach has now been largely superseded by finite element analysis, which is not constrained in the same way and for which there are now many commercial packages designed for complex loading (including static and dynamic forces plus temperature variations). Nonetheless, slip line field theory can provide analytical solutions to a number of metal forming processes. It utilises plots showing the directions of maximum shear stress in a rigid-plastic body which is deforming plastically in plane strain. These plots show anticipated patterns of plastic deformation from which the resulting stress and strain fields can be estimated.

The earlier analysis of plane strain plasticity in a simple case of uniaxial compression established the basis of slip line field theory, which enables the directions of plastic flow to be mapped out in plane strain plasticity problems. There will always be two perpendicular directions of maximum shear stress in a plane. These generate two orthogonal families of slip lines, called α-lines and β-lines.

![](figures/slip_lines_field_sml.png)

Experimentally, these lines can be seen in realistic plastic deformation situations, e.g.
* Polyvinyl chloride (PVC) viewed between crossed polars.
* Nitrogen-containing steels can be etched using Fry's reagent to reveal regions of plastic flow in samples such as notched bars and thick-walled cylinders.
* Under dull red heat in forging, we see a distinct red cross caused by the dissipation of mechanical energy on slip planes.

To develop slip line field theory for more general plane strain conditions, we need to recognise that the stress can vary from point to point.
Therefore, p can vary, but k is a material constant. As a result, the directions of maximum shear stress and the directions of the principal stresses can vary along a slip line.

![](../images/divider400.jpg)

### Hencky relations

These equations arise from consideration of the equilibrium equations in plane strain:

p + 2kφ = constant along an α-line
p – 2kφ = constant along a β-line

![](../images/divider400.jpg)

We can apply these relations to the classic problem of indentation of a material by a flat punch. This is important in hardness testing, in the foundations of buildings and in forging. In more general cases, slip lines do not intersect external boundaries at 45°, because of friction. In the extreme case, *sticking friction* occurs (a perfectly rough surface) and slip lines meet the surface at 90°.

![](figures/slip_lines_sticking_friction_SML.png)
Slip-line field for compression between a pair of rough parallel platens

The slip line patterns above are very useful for analysing plane strain deformation in a rigid-plastic isotropic solid. Arriving at such slip-line patterns is, however, rather complex: they are either derived from model experiments in which the slip-line field is apparent, or they are postulated from experience of problems with similar geometry. For a slip-line field to be a valid solution (but not necessarily a unique solution), the stress distribution throughout the whole body, not just in the plastic region, must violate neither stress equilibrium nor the yield criterion outside the slip-line field. The resultant velocity field must also be evaluated to ensure that strain compatibility is satisfied and matter is conserved. These are stringent conditions, and they mean that obtaining a slip-line field solution is often not simple. Instead, it is useful to take a simpler approach to analysing deformation processing operations, in which one or another of the stringent conditions is relaxed to give useful approximate solutions for part of the analysis.
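As an illustration of the Hencky relations in use, the sketch below reproduces the classical flat-punch indentation pressure. It assumes the standard slip-line field, which is not derived on this page: a slip line runs from the free surface beside the punch, where the hydrostatic pressure at yield is k, through a centred fan of angle π/2 to beneath the punch, and the normal stress under the punch exceeds the local hydrostatic pressure by k.

```python
import math

k = 1.0                            # shear yield stress (normalised)
p0 = k                             # hydrostatic pressure at the free surface at yield
p1 = p0 + 2.0 * k * (math.pi / 2)  # Hencky: p rises by 2*k*phi along a line rotating by pi/2
punch_pressure = p1 + k            # normal stress under the punch

print(punch_pressure / (2.0 * k))  # 1 + pi/2 ~ 2.57
```

This value, P/2k = 1 + π/2 ≈ 2.57, is the slip-line field result against which the upper-bound estimates later in this TLP are compared.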
Work Formula Method =

When deforming a body, work has to be done by the applied forces. In the simplest case, the work done can be estimated from the magnitude of the applied stress(es) and the extent of the deformation. This is analogous to simple mechanics, in which the work done is equal to the force applied multiplied by the distance moved. Clearly, this simple approach assumes, in the first instance, that all the work done by the applied forces results in deformation; this can be called "useful" work. If it is assumed that all the work done is useful, then the work formula approach leads to an underestimate of the actual forces needed, i.e. a "lower bound". In other words, it would not be possible to deform the body if at least those forces were not applied and hence that work was not done. In practice, frictional forces need to be overcome and heating of the body occurs due to the internal micro-mechanisms of deformation. This non-useful work is called "redundant" work; estimates can be made to allow for it, and so better approximations for the necessary forces can be made.

Consider a uniaxial tensile deformation process, in which σ1 = Y, σ2 = σ3 = 0, where Y is the uniaxial yield stress.

![](figures/bar_tension_sml.png)

Suppose that at some instant the bar is of length *l* and of cross-sectional area A, so that its volume V = Al. If the bar extends by an amount δl, the increment of work done per unit volume, δw, is $$\delta w = \frac{F\delta l}{V} = \frac{YA\delta l}{Al} = \frac{Y\delta l}{l}$$ So if the bar extends from length l0 to l1, the total work done per unit volume is \[\int\delta w = Y\int_{l_0}^{l_1}\frac{{\rm d}l}{l} = Y\ln\frac{l_1}{l_0}\]

This can be applied to wire drawing. In wire drawing, a tensile force is applied to the product, rather than a compressive force being applied to the billet (as in extrusion, for example).

![](figures/wire_drawing_sml.png)

To produce a drawn length l1, we have to feed a length l0 into the die, where A1l1 = A0l0 by conservation of volume.

Work done by F = Fl1. Volume of metal drawn through the die = A1l1.

Work done \(= YA_1l_1\ln\frac{l_1}{l_0} = Fl_1\) \[\Rightarrow \frac{F}{A_1} = \sigma = Y\ln\frac{l_1}{l_0} = Y\ln\frac{A_0}{A_1}\] where σ is the required stress, the *drawing stress*.

Therefore we can estimate the maximum reduction possible with perfect lubrication. If there is no work hardening, we require \(\sigma \le Y\), so the maximum reduction occurs when σ = Y: \[\Rightarrow \ln\frac{A_0}{A_1} = 1 \Rightarrow \frac{A_0}{A_1} = 2.72\;({\rm e\; to\; 3\; s.f.}) \Rightarrow \frac{A_1}{A_0} = 0.37\] 63% is therefore the maximum possible reduction of cross-sectional area in a perfect wire-drawing operation. Work hardening increases this to a slightly higher value. Friction between the wire and the die reduces it, causing *redundant work* – work in excess of the minimum necessary to cause the deformation.

In practice, the best reduction obtainable with real dies and good lubrication is approximately 50%. To allow for redundant work, we can apply empirical corrections, e.g. by using a value of efficiency, η: \[\eta = \frac{{\rm{Work\; formula\; estimate}}}{{\rm{Actual\; total\; work}}}\] Typical values of η are:
* extrusion: η = 45 – 55%
* wire drawing: η = 50 – 65%
* rolling: η = 60 – 80%

Clearly, the work formula method gives a lower bound to the true force required for a given deformation processing operation, because we are neglecting redundant work. For metalworking, it is often preferable to have an overestimate of the load required, in order to be sure that a given operation or process is possible.
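The work-formula estimate and its empirical efficiency correction are simple to sketch in code. In the minimal Python sketch below, the yield stress, area ratio and η value are illustrative:

```python
import math

def drawing_stress(Y, A0, A1, efficiency=1.0):
    """Work-formula estimate of the drawing stress: a lower bound when
    efficiency = 1; efficiency < 1 crudely allows for redundant work."""
    return Y * math.log(A0 / A1) / efficiency

Y = 150e6           # uniaxial yield stress, Pa (illustrative)
A0, A1 = 1.0, 0.63  # a 37% reduction in area (relative areas)

print(drawing_stress(Y, A0, A1) / Y)       # ~0.46: below Y, so the pass is feasible
print(drawing_stress(Y, A0, A1, 0.6) / Y)  # ~0.77: with eta = 60%, still feasible
```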
Limit Analysis =

The concept of a lower bound has been introduced with reference to the work formula method of analysing deformation. That approach generally results in an underestimate of the required load. Clearly, there will also be an "upper bound", i.e. an overestimate of the load that needs to be applied to effect a given deformation. The two approaches together are called "limit analysis", since the actual loads required will lie between the lower and upper bounds. In practice, limit analysis is much easier to apply to a problem than the slip-line field approach and can be reasonably accurate. The upper bound is particularly useful for the study of metalworking processes, in which it is essential to ensure sufficient forces are applied to cause the required deformation. In contrast, the lower bound is valuable in engineering where failure of a component must be avoided and hence an estimate of the minimum collapse load is needed.

The approach taken for estimating the upper bound is based on suggesting a likely deformation pattern, i.e. lines along which slip would be expected to occur for a given loading situation. Then the rate at which energy is dissipated by shear along these lines can be calculated and equated to the work done by an (unknown) external force. By refining the geometry of the deformation pattern, the minimum upper bound can be determined. Frictional forces can be accommodated in this approach. The approach utilises hodographs, which are self-consistent plots of velocity for the different regions within a body being deformed; the different regions are assigned by considering how the overall body will deform for a particular deformation process, and their relative velocities are estimated by assuming that the applied external force has unit velocity.

For both upper and lower bounds, one of the following two conditions has to be satisfied:
* *geometrical compatibility* between internal and external displacements or strains. This is usually concerned with kinematic conditions – velocities must be compatible to ensure no gain or loss of material at any point.
* *stress equilibrium*, i.e. the internal stress fields must balance the externally applied stresses (forces).

The basis of limit analysis rests upon two theorems, which can be proved mathematically. In simple terms, these theorems are:
1. *Lower Bound:* any equilibrium stress system in which the applied forces are just sufficient to cause yielding gives a lower-bound solution.
2. *Upper Bound:* any velocity field that can operate is associated with an upper-bound solution.

![](../images/divider400.jpg)

• Notched bar in tension

The plane strain condition is satisfied when the breadth b ≫ h, the depth of the bar beneath the notch.

#### Lower-Bound:

Find an equilibrium stress system, e.g. σ = 2k acting along the strip of material in line with the ligament beneath the notch, with σ = 0 elsewhere in the bar.

![](figures/example1_lowerbound_sml.png)

Therefore, for a breadth b, P = 2khb (load = stress × area).

#### Upper-Bound:

Postulate a suitable simple deformation pattern.

![](figures/example1_upperbound_sml.png)

Assume yielding by slip on 45° shear planes with shear yield stress k. Let the displacement along the shear plane AB be δx. Then the internal work done \(= k\left|AB\right|b\,\delta x = k\sqrt 2\,bh\,\delta x\), where the force acting on the shear plane AB is \(k\left|AB\right|b\).

Distance moved by the external load \(P = \delta x\cos 45^\circ = \frac{\delta x}{\sqrt 2}\)

\(\Rightarrow P\,\frac{\delta x}{\sqrt 2} = k\sqrt 2\,bh\,\delta x \Rightarrow P = 2kbh\)

So here we obtain the same result for the upper bound and the lower bound: P = 2kbh is the true failure load, the load required to cause plastic flow.

• Notched bar in plane bending

#### Lower-Bound:

The area immediately under the notch, above the neutral axis, is in tension with σ = 2k. The area below the neutral axis is in compression with σ = 2k.

![](figures/example2_lowerbound_sml.png)

where *h* = thickness of the slab beneath the notch, \(2k \cdot \frac{h}{2} \cdot b\) = magnitude of the forces in the tensile and compressive regions, and \(\frac{h}{2}\) = distance between the two.

Equating the couples, \(M = \left(2k \cdot \frac{h}{2} \cdot b\right)\frac{h}{2} = 0.5\,kh^2b\)

#### Upper-Bound:

Assume failure occurs by sliding around a 'plastic hinge' along a circular arc of length *l* and radius *r*.

![](figures/example2_upperbound_sml.png)

If the rotation is δθ, the internal work done \(= k \cdot lb \cdot r\delta\theta\) along one arc. External work = *M*δθ by one moment.
\[\Rightarrow M = klbr\] where no assumptions have been made regarding *l* and *r*. The upper bound theorem states that whatever values are taken for *l* and *r*, the result will be an upper bound. Clearly we wish to find the lowest possible value.

![](figures/example2_alpha_sml.png)

From the above geometry, \(l = r\alpha\) and \(r = \frac{h}{2\sin(\alpha/2)}\) \[\Rightarrow M = \frac{kh^2b}{4}\,\frac{\alpha}{\sin^2(\alpha/2)}\] and so to find the lowest possible value of M, we minimise the function \(\frac{\alpha}{\sin^2(\alpha/2)}\).

Let \(Y = \frac{\alpha}{\sin^2(\alpha/2)}\). Then \[\frac{{\rm d}Y}{{\rm d}\alpha} = \frac{1}{\sin^4(\alpha/2)}\left\{\sin^2\frac{\alpha}{2} - 2\,\frac{\alpha}{2}\cos\frac{\alpha}{2}\sin\frac{\alpha}{2}\right\} = 0 \quad{\rm when}\quad \sin\frac{\alpha}{2} = \alpha\cos\frac{\alpha}{2}\] \[\Rightarrow \tan\frac{\alpha}{2} = \alpha\] \[\Rightarrow M = \frac{kh^2b}{4}\cdot\frac{1}{\sin(\alpha/2)\cos(\alpha/2)} = \frac{1}{2}\,\frac{kh^2b}{\sin\alpha} \cong 0.69\,kh^2b\]

Taking the lower bound and the upper bound as limits, we therefore find \[0.5 \le \frac{M}{kh^2b} \le 0.69\] This forms a good example of constraining the value of the external force between a lower bound and an upper bound. It is also a good example of how to produce a lower limit on an upper bound calculation.
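The minimisation above is easily checked numerically; a short Python sketch scanning the hinge angle α:

```python
import numpy as np

# Upper bound for the notched bar in bending: M = (k*h**2*b/4) * g(a),
# with g(a) = a / sin(a/2)**2. The optimum satisfies tan(a/2) = a.
a = np.linspace(0.1, 3.0, 100_000)   # hinge angle alpha, radians
g = a / np.sin(a / 2.0)**2
i = np.argmin(g)

print(a[i])        # ~2.33 rad: solves tan(a/2) = a
print(g[i] / 4.0)  # ~0.69: the minimised M / (k*h**2*b)
```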
• Hodographs I

A hodograph is a diagram showing the relative velocities of the various parts of a deformation process. To analyse a complicated deformation process with many shear planes, it is worth looking at the basic equation for the rate of energy dissipation in an upper bound situation in more detail.

![](figures/shear_planes_sml.png)

ABCD is distorted into A'B'C'D' by shear along \(\overrightarrow{SS'}\) at a velocity \(\underline{v_s}\) in the metal. Suppose ABCD moves towards the shear plane SS' at a velocity \(\underline{v_1}\), and suppose that there is a pressure p acting on the area al (where l is the dimension out of the plane of the paper) helping to cause this movement.

Rate of performance of work externally \(= pal\left|\underline{v_1}\right|\)
Rate of performance of work internally \(= k\left|SS'\right|l\left|\underline{v_s}\right|\), since the only internal work assumed to occur is that required to effect the shear deformation ABCD → A'B'C'D'.

Equating these, \(\Rightarrow pa\left|\underline{v_1}\right| = k\left|SS'\right|\left|\underline{v_s}\right| \Rightarrow pa = k\left|SS'\right|\frac{\left|\underline{v_s}\right|}{\left|\underline{v_1}\right|}\)

Simple vector algebra relates v1, v2 and vs, as on the diagram below:

![](figures/shear_vectors.png)

If in a deformation process there are n such shear planes of the type SS', then \(pa = k\sum_n\left|SS'_n\right|\left|v_{sn}\right|\), setting \(\left|\underline{v_1}\right| = 1.0\), i.e. unit velocity.

![](../images/divider400.jpg)

### Rules for constructing a hodograph

The animation below illustrates the seven rules for constructing a hodograph, for the case of a constrained punch. An analysis of the geometry of the hodograph enables an upper bound for the applied force to be calculated.

Let Oq be a velocity vector of unit magnitude in the hodograph, i.e. νOq = 1. Due to the dead metal zone, Q and Q' move at the same velocity. O is a stationary component of the system, anywhere in the surrounding perfectly rigid metal which has not yielded at all. Oq and Oq' are in essence vectors defining the motion of particles in region Q'. Or is the velocity of a particle in region R; q'r is a vector defining the shear velocity parallel to Q'R. Os is the velocity of a particle in region S; rs is a vector defining the shear velocity parallel to RS. Hence, \[v_{Or} = \frac{1}{\tan\theta},\;\; v_{q'r} = \frac{1}{\sin\theta}\;\;{\rm and}\;\; v_{Os} = v_{rs} = \frac{1}{2\sin\theta}\]

Using the upper-bound work balance above, we have: \[p\left(\frac{b}{2}\right) = k\left\{Q'R\,v_{q'r} + OR\,v_{Or} + RS\,v_{rs} + OS\,v_{Os}\right\}\] where Q'R is the length of the line dividing regions Q' and R, OR the length of the line dividing regions O and R, RS the length of the line dividing regions R and S, and OS the length of the line dividing regions O and S:

\[p\left(\frac{b}{2}\right) = kb\left\{\frac{1}{2\cos\theta}\cdot\frac{1}{\sin\theta} + 1\cdot\frac{1}{\tan\theta} + \frac{1}{2\cos\theta}\cdot\frac{1}{2\sin\theta} + \frac{1}{2\cos\theta}\cdot\frac{1}{2\sin\theta}\right\}\] \[= kb\left\{\frac{1}{\sin\theta\cos\theta} + \frac{\cos\theta}{\sin\theta}\right\} = kb\left\{\frac{1 + \cos^2\theta}{\cos\theta\sin\theta}\right\}\] \[\Rightarrow \frac{p}{2k} = \frac{1 + \cos^2\theta}{\sin\theta\cos\theta} = f\left(\theta\right)\]

\(\frac{{\rm d}f}{{\rm d}\theta} = 0\), and f is then a minimum, when \[\cos 2\theta = -\frac{1}{3} \Rightarrow \theta = 54.74^\circ,\;{\rm when}\;\sin\theta = \frac{\sqrt 2}{\sqrt 3}\;{\rm and}\;\cos\theta = \frac{1}{\sqrt 3}\] giving a minimum \(\frac{p}{2k} = 2\sqrt 2 = 2.83\) from this upper bound analysis.

![](../images/divider400.jpg)
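A quick numerical check of this minimisation, scanning θ in Python:

```python
import numpy as np

# Constrained-punch upper bound: p/2k = (1 + cos(t)**2) / (sin(t)*cos(t))
t = np.radians(np.linspace(1.0, 89.0, 100_000))
f = (1.0 + np.cos(t)**2) / (np.sin(t) * np.cos(t))
i = np.argmin(f)

print(np.degrees(t[i]))  # ~54.74 degrees
print(f[i])              # ~2.828 = 2*sqrt(2)
```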
When indenting using a sliding (frictionless) punch, we can postulate a different deformation pattern without the dead metal zone. The system also has a plane of symmetry, and a hodograph can be constructed as follows.

As before, Oq = 1.0. Material in R travels in the direction shown with velocity \(Or = \frac{1}{\sin\theta}\):

\(\left|Or\right| = \left|rs\right| = \frac{1}{\sin\theta} = \left|st\right| = \left|Ot\right|\), \(\left|Os\right| = \frac{2\cos\theta}{\sin\theta}\), \(\left|qr\right| = \frac{1}{\tan\theta} = \frac{\cos\theta}{\sin\theta}\)

Lengths in the drawing of the indent: \(QR = SO = \frac{b}{2}\), \(OR = RS = ST = OT = \frac{b}{4\cos\theta}\)

We therefore have: \[\frac{pb}{2} = k\left\{OR\,v_{Or} + RS\,v_{rs} + OS\,v_{Os} + ST\,v_{st} + TO\,v_{tO}\right\}\] \[= kb\left\{\frac{1}{4\cos\theta}\cdot\frac{1}{\sin\theta} + \frac{1}{4\cos\theta}\cdot\frac{1}{\sin\theta} + \frac{1}{2}\cdot\frac{2\cos\theta}{\sin\theta} + \frac{1}{4\cos\theta}\cdot\frac{1}{\sin\theta} + \frac{1}{4\cos\theta}\cdot\frac{1}{\sin\theta}\right\}\] \[= kb\left\{\frac{1}{\sin\theta\cos\theta} + \frac{\cos\theta}{\sin\theta}\right\}\] \[\Rightarrow \frac{P}{2k} = \frac{1 + \cos^2\theta}{\sin\theta\cos\theta}\] as before for the case of the constrained punch.

This analysis has assumed that no friction occurs at the punch face, so that particles in R move parallel to OR. If there is friction, we can take it to be sticking friction, so that a shear stress *k* acts there, with slippage velocity \(= v_{qr}\)

\(\Rightarrow\) in this case \(\frac{P}{2k} = \frac{2 + 3\cos^2\theta}{2\sin\theta\cos\theta}\)

\(\Rightarrow\) of the three possible upper bound solutions, the 'best' (lowest) answer is \(\frac{P}{2k} = 2\sqrt 2 = 2.83\). This is a 10% overestimate of the true value of \(\frac{P}{2k}\) found from slip-line field theory.

• Hodographs II

Extrusion is an important working process. A simple form of extrusion used for non-ferrous metals involves a smooth square die. We define the extrusion ratio, R, as the ratio of areas: \(R = \frac{A_0}{A_1} = \frac{H}{h}\) for plane strain (R > 1); e.g. R = 4 corresponds to a 75% reduction in area. For a square die with sliding on the die face in plane strain, a hodograph can be constructed.

![](../images/divider400.jpg)

An alternative approach to an extrusion hodograph assumes there is a 'dead metal' zone. Then \[p\frac{H}{2} = k\left\{PQ\,v_{PQ} + DQ\,v_{dq} + QR\,v_{qr}\right\}\] After similar algebra to the sliding case, we obtain \[\frac{p}{2k} = \frac{1}{2\left(\sin\varphi - \cos\varphi\right)}\left\{\frac{R + 1}{\sin\varphi} - 2\left(R - 1\right)\cos\varphi\right\}\] Minimising the right-hand side, \(\cot\varphi = 1 - \frac{2}{\sqrt{R + 1}}\). After more algebra, it is found that \[\frac{p_{\min}}{2k} = 2\left(\sqrt{R + 1} - 1\right)\] Note that for low R (< 4) this value is less than that for the sliding square die, even if the die face is frictionless.

\(\Rightarrow\) For R < 4 this is a better upper bound solution for extrusion problems.

![](../images/divider400.jpg)
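The dead-metal-zone bound can be tabulated against a work-formula lower bound. In the sketch below, the lower bound takes the plane-strain yield stress as 2k, so that the work formula gives p/2k = ln R; the sample values of R are illustrative.

```python
import numpy as np

for R in (2.0, 4.0, 9.0):                   # extrusion ratios (illustrative)
    lower = np.log(R)                       # work-formula estimate: p/2k = ln(R)
    upper = 2.0 * (np.sqrt(R + 1.0) - 1.0)  # dead-metal-zone upper bound
    print(f"R = {R:.0f}: {lower:.2f} <= p/2k <= {upper:.2f}")
```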
Finite Element Method =

Finite element analysis (FEA) is an increasingly powerful computational technique in which the geometry of even a very complex body is represented by a mesh comprising a large number of discrete regions called finite elements. The elements are linked to each other at discrete points called nodes, and elements of different geometry (with varying numbers of nodes) can be selected for different types of problems. The use of FEA extends well beyond simple deformation, since the approach can handle static and dynamic loading (with time-dependent changes in loads), two- or three-dimensional objects, and different types of loading, e.g. thermal and electromagnetic as well as mechanical forces. Many commercial packages are now available.

The animation below depicts a finite-element simulation of the production of gudgeon pins. These are pins which hold a piston and a connecting rod together. The process consists of 4 stages:

1. Upsetting
2. Indentation
3. Backward Extrusion
4. Punching – the base of the cup is punched out, producing the gudgeon pin.

The simulation accounts for steps 1 to 3.

Summary =

This module has presented different approaches to modelling plastic flow in metal forming operations, showing their relative importance and limitations. The approaches covered are:

* Slip line field theory
* Work formula analysis
* Limit analysis and hodographs
* Finite element analysis

The backgrounds to the approaches have been summarised, and examples presented to show how each can be used in practice to provide estimates of the necessary deformation loads for the various metal forming operations.

Questions =

### Deeper questions

*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

1. A specimen of sheet steel is tested in unequal biaxial tension, and Lüders bands form at 60° to one of the tensile axes. Show that the ratio between the two principal stresses in the plane of the sheet is 1:5. If the greater of these two principal stresses is 500 MPa and the steel obeys von Mises' yield criterion, show that the yield stress in uniaxial tension is 458 MPa.

2. Use a work formula approach to estimate the minimum pressure required to extrude aluminium curtain rail of I-section, 12 mm high with 6 mm wide flanges, all 1.6 mm thick, from 25 mm diameter bar stock. [The mean uniaxial yield stress, Y, for aluminium for heavy deformation at room temperature is 150 MPa. The minimum pressure, \({p\_{\min }}\), required is given by the formula \({p\_{\min }} = Y\ln \left( {\frac{{{A\_0}}}{{{A\_1}}}} \right)\), where \({A\_0}\) is the original cross-sectional area and \({A\_1}\) is the cross-sectional area of the extruded I-section.]

3. The diagram below shows a possible deformation pattern for the direct plane-strain extrusion of a metal slab, initially 40 mm thick, through a symmetrical 45° tapering die, with an extrusion ratio of 2. The diagram shows half the deformation pattern, and the angles BCD and CBD are both 45°. The distance AB is 15 mm. The width of the slab is 100 mm and its yield stress in pure shear is 150 MPa.

![](figures/question_extrusion1_sml.png)

Calculate an upper bound to the extrusion force F acting on the slab if the extrusion process is frictionless.

Going further =

### Books:
[1] W. Johnson and P.B. Mellor, *Engineering Plasticity*, Van Nostrand Reinhold Company Ltd., 1978.

[2] G.W. Rowe, C.E. Sturgess, P. Hartley and I. Pillinger, *Finite-Element Plasticity and Metalforming Analysis*, Cambridge University Press, 1991.
Aims

On completion of this TLP you should:

1. be able to interpret metallographic micrographs, recognising some common features of microstructure.
2. be able to suggest a solidification or cooling route on a phase diagram for a given microstructure.

Before you start

It is useful to read the TLPs on specimen preparation and optical microscopy (which explain how to mount samples and how to go about optical microscopy) and on phase diagrams (which looks at how to construct phase diagrams and includes an introduction to their use).

Introduction

The microstructure of a material has a large effect on its properties. In order to understand the properties of a material it is necessary to consider many things, including the chemical compositions, proportions, shapes and types of phase present, as well as the defect distributions and crystal structure. For a given micrograph and associated phase diagram it is often possible to 'plot' a cooling route, suggesting which phases would precipitate out first and connecting this to what is seen in the micrograph.

Steels

Steel has very widely ranging applications, and a number of different commonly used forms. Many of these forms can be seen on the Fe-Fe3C phase diagram, showing the equilibrium between iron and the cementite (Fe3C) phase. Many steels are described by their carbon content; they typically have less than 1.5% carbon, but most contain other additions. The interactive phase diagram below shows different regions with varying carbon concentrations; hover over them for more information.

The phase diagram above shows a very important example of a solid-solid transformation: the eutectoid transformation. The type of microstructure that forms is dependent on the cooling rate, due to diffusion limitations in the lattice. One of these microstructures is the eutectoid lamellar structure. As a eutectoid steel cools below the eutectoid temperature, pearlite lamellae form. Due to kinetic limitations, carbon rejected from the ferrite phase cannot diffuse far away from the boundaries. This results in cooperative growth of the lower and higher carbon phases: ferrite and cementite respectively.

Increasing the cooling rate, taking the alloy further from thermodynamic equilibrium, can result in shear transformations rather than diffusive ones. The transformation to martensite is an example of this.

![](images/mic_martensite45.jpg)

Note the lenticular deformation twins that minimise strain energy.

Additions such as Cr, Si, Ni or Mn stabilise different phases. The diagrams below show how increasing concentrations of additions can change the stable phase at low and high temperatures. This is how we can have austenitic steel at room temperature, where ferrite and cementite are more stable and a diffusionless martensitic transformation would take place if the system (without additions) were quenched.

![](images/alphagammastabilisers.gif)

These show temperature vs concentration of stabilisers. The diagram on the left shows that ferrite is stable at low and high temperatures for high concentrations of stabilisers. The one on the right shows that austenite can be stable at lower temperatures with C, Ni or Mn additions.

Cast Irons

The steel phase diagram above is not actually the equilibrium phase diagram for the iron-carbon system, but due to kinetics Fe3C usually forms. For ferrous alloys with higher carbon contents graphite often (although not always) forms. These high C content alloys are referred to as cast irons, the three common types of which are 'grey', 'spheroidal' and 'white'.
Cast irons usually contain some Si or other alloying additions, which often stabilise the graphite phase, such that it precipitates out even when the wt% of carbon present is less than 4.3% (the eutectic composition for the Fe-C phase diagram).

### Grey Cast Irons:

These usually contain more C or Si than white cast irons, and require a lower cooling rate. They are called 'grey' cast irons not because of their colour, but due to the appearance of a fractured surface: grey cast irons are quite ductile and have unreflective fracture surfaces.

Steps on cooling:

1. When the alloy falls below the liquidus, graphite begins to precipitate out. For a simple Fe-C system this means the composition must be hypereutectic, but the addition of Si moves the eutectic composition by stabilising the graphite phase. The graphite precipitates are flake-like, with growth occurring in preferred crystallographic directions.
2. At the eutectic temperature a cementite and γ (austenite) eutectic forms from the remaining liquid phase; this is known as ledeburite.
3. As the temperature continues to decrease, carbon diffuses out of solid solution to the graphite precipitates.
4. When the eutectoid temperature is reached, the remaining austenite transforms to pearlite (lamellar cementite (Fe3C) and ferrite (iron with some carbon in solid solution)). Some alloying additions may modify this final transformation; for example, if enough Ni is present the austenite will not transform to pearlite.

The final microstructure shows graphite flakes in a matrix of transformed ledeburite; many more examples may be found in the micrograph library.

### Spheroidal Cast Irons:

These are similar to grey cast irons, but they contain 'inoculants' – alloying additions that change the form of the graphite precipitates. These inoculants are usually Mg or Ce (~0.1 wt%) and they cause the graphite to grow in spheres rather than flakes. There are two theories offering an explanation for this. The first describes the Mg or Ce impurities "poisoning" the graphite growth sites, attaching to them and slowing growth in that direction. The second suggests an increase in interfacial energy – the surface energy between the melt and the graphite – such that surface area per volume is minimised. The cooling steps follow the same route as for the grey cast irons, with the graphite precipitates growing in spherical shapes.

### White Cast Irons:

These contain less Si or C than grey cast irons and undergo faster cooling. This results in cementite forming in favour of graphite. Again, the name 'white' has little to do with the ordinary appearance of the alloy, but rather refers to the fracture surface. White cast irons are much more brittle than grey cast irons, and so their fracture surfaces are reflective, leading to their classification as 'white'.

The cooling route depends on the composition of the melt: whether it is hyper- or hypo-eutectic (the eutectic composition is at 4.3 wt% C). A hypereutectic composition leads to the cementite precipitating out first; a hypoeutectic composition leads to γ-austenite precipitating out first.

Note – "hypereutectic" means a higher carbon content than the eutectic composition; "hypoeutectic" means a lower carbon content than the eutectic composition.

The first phase to precipitate out forms dendrites due to non-equilibrium effects; the cooling melt does not always follow the predicted composition on the phase diagram.
When the eutectic is crossed, the remaining melt solidifies as an austenite–cementite eutectic (ledeburite). Carbon continues to be ejected from the austenite as the alloy cools, diffusing to the cementite. At the eutectoid temperature the final transformation takes place, from austenite to pearlite. In some very quickly cooled white cast irons the austenite may transform to martensite.

Copper based alloys =

**Brasses:** Brasses are copper alloys with zinc; see the Cu-Zn phase diagram.

### Alpha brass:

From the copper-zinc phase diagram we can see the solid solubility of zinc in copper: for concentrations of zinc up to about 30 at%, at equilibrium the alloy should be of a single phase. Alpha brasses are often seen with a single phase, but this usually arises due to annealing. As the alloy cools, α phase copper precipitates out first, changing the composition of the remaining melt. This may result in coring and dendritic growth, as well as the formation of other phases, such as the β phase, when the zinc concentration in the remaining liquid is sufficiently high. Annealing the sample to aid diffusion means the composition becomes more uniform as zinc diffuses down the concentration gradient, and a single phase predominates.

### Alpha-beta brass:

Another very common form of brass is α-β brass. α-β brasses have zinc concentrations of between about 30 at% and 45 at% and are two phase alloys. The α phase precipitates out first and may form a Widmanstätten structure (see micrograph below), solidifying in plates along preferred growth directions.

### Copper-Tin Alloys:

Cu-Sn alloys are sometimes called bronzes, although this term also covers other kinds of copper alloys (e.g. with silicon and aluminium).

![Cu-Sn phase diagram](images/cusnphaseperi.gif)

Copper tin phase diagram showing a peritectic point

The peritectic reaction (see diagram above) is an important example of a microstructural transformation. Sn–21wt%Cu exhibits this transformation from a solid phase and a liquid phase to a different solid phase. Before the transformation begins, the system is comprised of the ε phase and liquid. Below 415°C the equilibrium solid phase is η. The peritectic transformation begins to take place at 415°C; the new phase precipitates heterogeneously on the surface of the ε precipitates. The growing layer of η on the surface of the ε precipitates prevents copper diffusing out to remove inhomogeneities, so some of the copper is trapped within the ε precipitates and the liquid has a lower Cu concentration than the bulk composition. This means the peritectic reaction never goes to completion (i.e. not all of the liquid and solid go to the second solid). In this example the liquid continues to cool until it reaches the eutectic temperature, 227°C, when it transforms.

Summary =

By considering thermodynamics (phase diagrams) and some kinetics it is possible to gain an understanding of how a microstructure comes about. Phase diagrams are very powerful tools for interpreting micrographs, but there also exist many microstructures that can only be explained by considering things like diffusion limitation.

Questions =

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. Where α and β are solid phases, which of the following describes a eutectic transformation?
| | | |
| - | - | - |
| | a | \(\alpha + {\rm{liquid}} \to \beta \) |
| | b | \(\alpha \to \beta \) |
| | c | \({\rm{liquid}} \to \beta + \alpha \) |
| | d | \({\rm{liquid}} \to \beta + {\rm{liquid}}\) |

2. Where α and β are solid phases, which of the following describes a peritectic transformation?

| | | |
| - | - | - |
| | a | \(\alpha + {\rm{liquid}} \to \beta \) |
| | b | \(\alpha \to \beta \) |
| | c | \({\rm{liquid}} \to \beta + \alpha \) |
| | d | \({\rm{liquid}} \to \beta + {\rm{liquid}}\) |

3. Which of the following alloying additions stabilises the austenite phase of steel? (You may select more than one)

1. Carbon - C
2. Nickel - Ni
3. Cerium - Ce
4. Silicon - Si

4. What differences in appearance might you expect between annealing and deformation twins?

| | | |
| - | - | - |
| | a | Annealing twins are smaller than deformation twins |
| | b | Deformation twins are smaller than annealing twins |
| | c | Deformation twins have flat sides, annealing twins are lens-shaped |
| | d | Annealing twins have flat sides, deformation twins are lens-shaped |
| | e | There are no differences |

5. Why is spheroidal cast iron tougher than grey cast iron?

Going further =

### Books

Porter, D.A. and Easterling, K., *Phase Transformations in Metals and Alloys*, 2nd edition, Routledge, 1992.

G.A. Chadwick, *Metallography of Phase Transformations*, Butterworth & Co (Publishers) Ltd, 1972.
Aims

On completion of this TLP you should:

* Understand the concept of a lattice plane;
* Be able to determine the Miller indices of a plane from its intercepts with the edges of the unit cell;
* Be able to visualise and draw a plane when given its Miller indices;
* Be aware of how knowledge of lattice planes and their Miller indices can help to understand other concepts in materials science.

Before you start

You should understand the concepts of a lattice, unit cell, crystal axes, crystal system and the variations (primitive, FCC, BCC) which make up the Bravais lattices. You should also understand the concepts of vectors and planes in mathematics.

Introduction

Miller indices are a method of describing the orientation of a plane or set of planes within a lattice in relation to the unit cell. They were developed by William Hallowes Miller. These indices are useful in understanding many phenomena in materials science, such as explaining the shapes of single crystals, the form of some materials' microstructure, the interpretation of X-ray diffraction patterns, and the movement of a dislocation, which may determine the mechanical properties of the material.

Parallel lattice planes =

This animation explains the relationships between parallel planes and their indices. Click "Start" to begin and use the buttons at the bottom right to navigate through the pages.

Lattice planes can be represented by showing the trace of the planes on the faces of one or more unit cells. The diagram shows the trace of the \((\bar 213)\) planes on a cubic unit cell.

![Diagram showing trace of the (-213) planes on a cubic unit cell](images/trace.jpg)

How to draw a lattice plane =

Bracket Conventions =

In crystallography there are conventions as to how the indices of planes and directions are written. When referring to a specific plane, "round" brackets are used: (*hkl*)

When referring to a set of planes related by symmetry, then "curly" brackets are used: {*hkl*}

These might be the {100} type planes in a cubic system, which are (100), (010), (001), \((\bar 100)\), \((0\bar 10)\) and \((00\bar 1)\). These planes all "look" the same and are related to each other by the symmetry elements present in a cube, hence their different indices depend only on the way the unit cell axes are defined. That is why it is useful to consider them as the equivalent {100} set of planes.

Directions in the crystal can be labelled in a similar way. These are effectively vectors written in terms of multiples of the lattice vectors **a**, **b**, and **c**. They are written with "square" brackets: [*UVW*]

A number of crystallographic directions can also be symmetrically equivalent, in which case a set of directions are written with "triangular" brackets: <*UVW*>

Vectors and Planes

It may seem, after considering cubic systems, that any lattice plane (*hkl*) has a normal direction [*hkl*]. This is not always the case, as directions in a crystal are written in terms of the lattice vectors, which are not necessarily orthogonal, or of the same magnitude. A simple example is the (100) plane of a hexagonal system, where the direction [100] is actually at 120° (or 60°) to the plane. The normal to the (100) plane in this case is [210].
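The hexagonal case can be checked directly. The short sketch below (a standalone illustration, not part of the original TLP; the lattice parameters a and c are arbitrary placeholders) builds the hexagonal lattice vectors, takes the (100) plane normal from the reciprocal lattice, and confirms the angles quoted above:

```python
import numpy as np

# Hexagonal lattice vectors (a and c are arbitrary illustrative values)
a, c = 1.0, 1.6
a1 = a * np.array([1.0, 0.0, 0.0])
a2 = a * np.array([-0.5, np.sqrt(3) / 2, 0.0])
a3 = c * np.array([0.0, 0.0, 1.0])

# The reciprocal lattice vector b1 = a2 x a3 (up to scale) is normal to (100)
b1 = np.cross(a2, a3)

def angle_deg(u, v):
    return np.degrees(np.arccos(np.dot(u, v) /
                                (np.linalg.norm(u) * np.linalg.norm(v))))

d100 = a1               # the [100] direction
d210 = 2 * a1 + a2      # the [210] direction

print(angle_deg(d100, b1))  # 30.0 -> [100] lies at 60 deg to the (100) plane
print(angle_deg(d210, b1))  # 0.0  -> [210] is indeed the (100) plane normal
```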
Weiss Zone Law

The Weiss zone law states that if the direction [*UVW*] lies in the plane (*hkl*), then:

*hU* + *kV* + *lW* = 0

In a cubic system this is exactly analogous to taking the scalar product of the direction and the plane normal, so that if they are perpendicular, the angle between them, θ, is 90°, then cos θ = 0, and the direction lies in the plane. Indeed, in a **cubic** system, the scalar product can be used to determine the angle between a direction and a plane. However, the Weiss zone law is more general, and can be shown to work for all crystal systems, to determine if a direction lies in a plane.

From the Weiss zone law the following rule can be derived: the direction, [*UVW*], of the intersection of (*h*1*k*1*l*1) and (*h*2*k*2*l*2) is given by:

*U* = *k*1*l*2 − *k*2*l*1

*V* = *l*1*h*2 − *l*2*h*1

*W* = *h*1*k*2 − *h*2*k*1

As it is derived from the Weiss zone law, this relation applies to all crystal systems, including those that are not orthogonal.

Examples of lattice planes

The (100), (010), (001), \((\bar 100)\), \((0\bar 10)\) and \((00\bar 1)\) planes form the faces of the unit cell. Here, they are shown as the faces of a triclinic (a ≠ b ≠ c, α ≠ β ≠ γ) unit cell. Although in this image, the (100) and \((\bar 100)\) planes are shown as the front and back of the unit cell, both indices refer to the same family of planes, as explained in the animation above. It should be noted that these six planes are not all symmetrically related, as they are in the cubic system.

![Diagrams showing the planes forming the faces of the unit cell](images/faces.jpg)

The (101), (110), (011), \((\bar 101)\), \((\bar 110)\) and \((0\bar 11)\) planes form the sections through the diagonals of the unit cell, along with those planes whose indices are the negative of these. In the image the planes are shown in a different triclinic unit cell.

![Diagrams showing the planes forming the diagonals of the unit cell](images/110.jpg)

The (111) type planes in a face centred cubic lattice are the close packed planes. Click and drag on the image below to see how a close packed (111) plane intersects the fcc unit cell.

Draw your own lattice planes

This simulation generates images of lattice planes. To see a plane, enter a set of Miller indices (each index between 6 and −6), the numbers separated by a semi-colon, then click "view" or press enter.

Practical Uses

An understanding of lattice planes is required to explain the form of many microstructural features of many materials. The faces of single crystals form on certain lattice planes, typically those with low indices. In a similar way, the form of the microstructure in a polycrystalline material is strongly dependent on lattice planes. When a new phase of material forms, the surfaces tend to be aligned on low index planes, as with single crystals. When a new solid phase is formed in another solid, the interfaces occur along the most energetically favourable planes, where the two lattices are most coherent. This leads to plate-like precipitates forming at specific angles to each other.

![Photograph of a section through an Fe-Ni meteorite showing plates at 60° to each other](images/Widmanstatten_patterns.jpg)

Section through an Fe-Ni meteorite showing plates at 60° to each other

One method of plastic deformation is by dislocation slip. Understanding lattice planes and directions is essential to explain why dislocations move, combine and tangle in the observed way.
More information can be obtained in the TLP on slip in single crystals.

![A scanning electron micrograph of a single crystal of cadmium](images/cadmium%20slip.jpg)

A scanning electron micrograph of a single crystal of cadmium deforming by dislocation slip on 100 planes, forming steps on the surface

Twinning is where a part of the crystal is "flipped" to form a mirror image of the rest of the crystal, reflected in a particular lattice plane. This can either occur in annealing, or as a mechanism of plastic deformation.

![Micrograph of annealing twins in brass](images/twinning.jpg)

Annealing twins in brass

X-ray diffraction is a method of determining the crystal structure of a material. By interpreting the diffraction patterns as reflections from lattice planes in the material, the structure can be determined. More information can be obtained in the TLP on X-ray diffraction.

![X-ray Diffractometer](images/xray.jpg)

Apparatus for carrying out single crystal X-ray diffraction.

Worked examples =

Example A =

The figure below is a scanning electron micrograph of a niobium carbide dendrite in a Fe-34wt%Cr-5wt%Nb-4.5wt%C alloy. Niobium carbide has a face centred cubic lattice. The specimen has been deep-etched to remove the surrounding matrix chemically and reveal the dendrite. The dendrite has 3 sets of "arms" which are orthogonal to one another (one set pointing out of the plane of the image, the other two sets, to a good approximation, lying in the plane of the image), and each arm has a pyramidal shape at its end. It is known that the crystallographic directions along the dendrite arms correspond to the <100> lattice directions, and that the direction **ab** labelled on the micrograph is [101].

![Scanning electron micrograph of a niobium carbide dendrite in a Fe-34wt%Cr-5wt%Nb-4.5wt%C alloy](images/dendrite.jpg)

1) If point **c** (not shown) lies on the axis of this dendrite arm, what is the direction **cb**? Index face C, marked on the micrograph.

![](images/dendrite_directions_eg.jpg)

The diagram shows the [101] direction in red. The [100] direction is a <100> type direction that forms the observed acute angle with **ab**, and can be used as **cb**. Of the <100> type directions, we could also have used [001]. Using a right handed set of axes, we then have the z-axis pointing out of the plane of the image, the x-axis pointing along the direction **cb**, and the y-axis pointing towards the top left of the image.

![Scanning electron micrograph of a niobium carbide dendrite in a Fe-34wt%Cr-5wt%Nb-4.5wt%C alloy](images/dendrite_axes.jpg)

Face C must contain the direction **cb**, and its normal must point out of the plane of the image. Therefore face C is a (001) plane.

2) The four faces which lie at the end of each dendrite arm have normals which all make the same angle with the direction of the arm. Observing that faces A and B marked on the micrograph both contain the direction **ab**, and noting the general directions along which the normals to these faces point, index faces A and B.

Both faces A and B have normals pointing in the positive x and z directions, i.e. positive h and l indices. Face A has a positive k index, and face B has a negative k index. The morphology of the ends of the arms is that of half an octahedron, suggesting that the faces are {111} type planes. This would make face A, in green, a (111) plane, and face B, in blue, a \((1\bar 11)\) plane. As required, they both contain the [101] direction, in red.

![Diagram showing dendrite faces](images/eg1.jpg)
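As a computational aside before Example B (my own sketch, not part of the original TLP; the function names are invented for illustration), the Weiss zone law and the intersection rule quoted earlier amount to a dot product and a cross product of index triples:

```python
import numpy as np

def lies_in_plane(direction, plane):
    """Weiss zone law: [UVW] lies in (hkl) iff hU + kV + lW = 0."""
    return np.dot(direction, plane) == 0

def zone_axis(plane1, plane2):
    """Direction [UVW] of the intersection of (h1 k1 l1) and (h2 k2 l2).

    This is the cross product of the index triples; it follows from the
    Weiss zone law alone, so it holds in all crystal systems.
    """
    return np.cross(plane1, plane2)

# Example: intersection of (111) and (001), as worked through in Example B
d = zone_axis(np.array([1, 1, 1]), np.array([0, 0, 1]))
print(d)                                      # [ 1 -1  0]
print(lies_in_plane(d, np.array([1, 1, 1])))  # True
print(lies_in_plane(d, np.array([0, 0, 1])))  # True
```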
Example B =

1) Work out the common direction between the (111) and (001) planes in a triclinic unit cell.

The relation derived from the Weiss zone law in the section above states that the direction, [*UVW*], of the intersection of (*h*1*k*1*l*1) and (*h*2*k*2*l*2) is given by:

*U* = *k*1*l*2 − *k*2*l*1

*V* = *l*1*h*2 − *l*2*h*1

*W* = *h*1*k*2 − *h*2*k*1

We can use this relation as it applies to all crystal systems, including the triclinic system that we are considering. We have *h*1 = 1, *k*1 = 1, *l*1 = 1 and *h*2 = 0, *k*2 = 0, *l*2 = 1. Therefore:

*U* = (1 × 1) − (0 × 1) = 1

*V* = (1 × 0) − (1 × 1) = −1

*W* = (1 × 0) − (0 × 1) = 0

So the common direction is \([1\bar 10]\). This is shown in the image below:

![Diagram showing common direction](images/WZeg.jpg)

If we had defined the (001) plane as (*h*1*k*1*l*1) and the (111) plane as (*h*2*k*2*l*2) then the resulting direction would have been \([\bar 110]\), i.e. anti-parallel to \([1\bar 10]\).

2) Use the Weiss zone law to show that the direction \([1\bar 10]\) lies in the (111) plane.

We have *U* = 1, *V* = −1, *W* = 0, and *h* = 1, *k* = 1, *l* = 1.

*hU* + *kV* + *lW* = (1 × 1) + (1 × −1) + (1 × 0) = 0

Therefore the direction \([1\bar 10]\) lies in the plane (111).

Summary =

Miller indices are the convention used to label lattice planes. This mathematical description allows us to define accurately planes within a crystal, and quantitatively analyse many problems in materials science.

Questions =

### Game: Identify the planes

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. Which one of the following statements about the (241) and \((\bar 2\bar 4\bar 1)\) planes is false?

| | | |
| - | - | - |
| | a | They are perpendicular. |
| | b | They are part of the same set of planes. |
| | c | They are part of the same family of planes. |
| | d | They are parallel. |

2. Does the [122] direction lie in the (301) plane?

| | | |
| - | - | - |
| | a | Yes |
| | b | No |

3. When writing the index for a set of symmetrically related planes, which type of brackets should be used?

| | | |
| - | - | - |
| | a | (Round) |
| | b | {Curly} |
| | c | <Triangular> |
| | d | [Square] |

4. Which of the <110> type directions lie in the (112) plane?

| | | |
| - | - | - |
| | a | [110] and [110] |
| | b | [101] and [101] |
| | c | [011] and [101] |
| | d | [110] and [110] |

5. What is the common direction between the (132) and (133) planes?

| | | |
| - | - | - |
| | a | [310] |
| | b | [310] |
| | c | [410] |
| | d | [410] |

6. Which set of planes in a cubic-close-packed structure (such as copper) is close packed?

| | | |
| - | - | - |
| | a | {110} |
| | b | {100} |
| | c | {111} |
| | d | {222} |

### Open-ended questions

*The following questions are not provided with answers, but intended to provide food for thought and points for further discussion with other students and teachers.*

7. Practice sketching some lattice planes. Make sure you can draw the {100}, {110} and {111} type planes in a cubic system.

8. Draw the trace of all the (121) planes intersecting a 2 × 2 × 2 block of orthorhombic (a ≠ b ≠ c, α = β = γ = 90°) unit cells.

9. Sketch the arrangement of the lattice points on a {111} type plane in a face centred cubic lattice. Do the same for a {110} type plane in a body centred cubic lattice. Compare your drawings. Why do you think the {110} type planes are often described as the "most close packed" planes in bcc?
Going further =

### Books

[1] D. McKie and C. McKie, *Crystalline Solids*, Thomas Nelson and Sons, 1974. A very comprehensive crystallography text.

[2] C. Hammond, *The Basics of Crystallography and Diffraction*, Oxford, 2001. Chapter 5 covers lattice planes and directions. The rest of the book gives an introduction to crystallography and diffraction in general.

[3] B.D. Cullity, *Elements of X-Ray Diffraction*, Prentice Hall, 2003. Covers X-ray diffraction in detail. Chapter 2 covers the crystallography required for this.

[4] C. Kittel, *Introduction to Solid State Physics*, John Wiley and Sons, 2004. Chapter 1 covers crystallography. The book then goes on to cover a wide range of more advanced solid state science.
Aims

On completion of this TLP you should be able to:

* understand the basic physics behind nuclear fission;
* describe the common features of nuclear reactors;
* understand the various *neutron cross-sections*;
* explain the mechanisms of radiation damage, and its consequences, particularly for structural steels;
* understand the material problems associated with extreme conditions, in particular large radiation fluxes;
* explain the materials selection for the components at the heart of a nuclear reactor:
  + moderators;
  + control rods;
  + cladding.

Before you start

Readers should be familiar with the concepts of crystal defects and diffusion. A familiarity with the basics of mechanical behaviour and corrosion of materials would also be useful. Readers should be familiar with standard nuclear terminology: the definitions of isotope and nuclide, the composition of nuclei, and the definitions of atomic number and mass number.

A note on units: throughout this TLP the unit used for energy is the electron volt (eV), the energy associated with one electronic charge (1.602 × 10−19 C) subjected to a potential difference of 1 V, i.e. 1 eV ≈ 1.602 × 10−19 J.

Introduction to Nuclear Processes =

Each nucleus, consisting of *protons* and *neutrons* (collectively known as *nucleons*), has an associated *binding energy*. A graph of binding energy per nucleon is shown in the graph below. The total binding energy of a nucleus is the energy released when a nucleus is assembled from individual nucleons; the greater the energy release, the lower the potential energy of the nucleus, so higher binding energy in the graph represents greater stability. When one nucleus is converted to another or others of higher binding energy, whether that be through a natural radioactive process or through an artificially induced process, the difference in the total binding energies of the nuclei is released as kinetic energy of the particles produced and gamma rays. This energy can be harnessed through traditional methods, e.g. by heating water to generate steam to drive a turbine, and so electricity can be produced.

Origins of Binding Energy -

The measured binding energies of the nuclides can be fitted reasonably well by Weizsäcker's formula (see below). The formula is derived by treating the nucleus as analogous to a liquid drop, with surface energy and volume energy terms leading to the two dominant contributions: a term proportional to *A*, the mass number, and hence to the volume of the nucleus, and a term proportional to −*A*2/3 due to the surface energy. These two terms compete, much in the same way they do in other processes (e.g. nucleation), facilitating a qualitative understanding of why nuclei split up or join together under certain conditions.

![A graph of the binding energy per nucleon, in MeV, for common nuclides](images/binding-energy-per-nucleon-graph.png)

A graph of the binding energy per nucleon, in MeV, for common nuclides.

Fusion

Energy is given off when a nucleus becomes more stable, i.e. approaches the maximum on the graph above. Moving from lighter nuclei towards this maximum requires two nuclei to combine and form a heavier one (*fusion*), whereas moving from heavier nuclei towards this maximum requires the nucleus to split apart (*fission*). The energy release per mass of nuclide is much higher for fusion than for fission.
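To make the liquid-drop picture above concrete, the sketch below evaluates a semi-empirical (Weizsäcker-type) binding energy formula. The coefficient values are typical textbook fits, not taken from this TLP, so treat the numbers as illustrative:

```python
import math

def binding_energy_per_nucleon(Z, A):
    """Semi-empirical mass formula, B/A in MeV.

    Coefficients are typical literature values (MeV); different fits
    vary at the level of a few percent.
    """
    a_v, a_s, a_c, a_a, a_p = 15.8, 18.3, 0.714, 23.2, 12.0
    B = (a_v * A                                # volume term, proportional to A
         - a_s * A ** (2 / 3)                   # surface term
         - a_c * Z * (Z - 1) / A ** (1 / 3)     # Coulomb repulsion of protons
         - a_a * (A - 2 * Z) ** 2 / A)          # neutron/proton asymmetry
    # pairing term: even-even nuclei are more tightly bound
    if Z % 2 == 0 and (A - Z) % 2 == 0:
        B += a_p / math.sqrt(A)
    elif Z % 2 == 1 and (A - Z) % 2 == 1:
        B -= a_p / math.sqrt(A)
    return B / A

for name, Z, A in [("Fe-56", 26, 56), ("U-235", 92, 235)]:
    print(f"{name}: {binding_energy_per_nucleon(Z, A):.2f} MeV per nucleon")
# Fe-56 comes out near the ~8.8 MeV maximum of the curve; U-235 lower
# (~7.6 MeV), which is why fission of heavy nuclei releases energy.
```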
Fusion has many other attractive attributes as a basis for power generation, but since nuclei are positively charged, sufficient energy must be put into the system to overcome the repulsion between nuclei so that a fusion process can occur. This *Coulomb barrier* can also be expressed as an *ignition temperature*. The technical challenges are many, and nothing close to a commercially viable reactor currently exists. Fusion for power generation is still a prominent research topic, and experimental reactors are in the process of being built, such as ITER (International Thermonuclear Experimental Reactor), which is planned to be completed by 2018.

Since nuclear fusion is not yet a practical power source, this TLP will instead focus on nuclear fission as a means to generate heat and electricity.

Fission -

Nuclear fission, as previously mentioned, involves splitting a heavier nucleus into two lighter nuclei. Fission can be induced if a nucleus absorbs a neutron of sufficient energy. If a nucleus undergoes fission regardless of the incident neutron energy, the nucleus is referred to as *fissile*; otherwise, if there is a threshold energy, then the nucleus is referred to as *fissionable*. Examples of fissile nuclides include 233U, 235U and 239Pu. The nuclide most commonly used in nuclear reactors is 235U.

A neutron will not necessarily induce fission if it passes through the nucleus. For example, fast neutrons are less likely to induce fission in 235U than thermal neutrons (i.e. neutrons with kinetic energy of the order of *kT*). Qualitatively, this makes sense, since the faster a neutron is travelling, the less time it spends inside the nucleus and so the less opportunity it has to induce fission within the nucleus. The actual reasons for this are complicated, and this topic is explored further on the "Cross-Sections" page.

Fissionable nuclides, such as 238U and 239Pu, are also used in so-called "fast" reactors, where the neutrons are travelling fast enough (about 1 MeV, or roughly 5% of the speed of light) to overcome the activation energy required to make fissionable nuclides decay.

The movie below illustrates the fission process:

Video illustrating nuclear fission

As can be seen in the movie, the parent nucleus decays into two fission fragments of unequal mass with a combined kinetic energy of about 169 MeV, and several neutrons with a kinetic energy of about 2 MeV each (for 235U, the average number of neutrons produced is 2.4, but can be as high as 5). These neutrons are highly energetic, with 7–8 orders of magnitude more energy than thermalized neutrons. A gamma ray of about 7 MeV is also released. The neutrons could induce further fission events in other nuclei and thus cause a chain reaction, but in practice they are too fast and must first be slowed down inside the reactor.

![Graph showing the distribution of fission fragment mass numbers for three nuclides](images/fission-products-graph.png)
Graph showing the distribution of fission fragment mass numbers for three nuclides: U-233, U-235 and Pu-239. The fragments formed tend to be of unequal masses, with each fragment showing a Gaussian distribution about a particular lower or higher mass. [Graph is under a CC[BY][NC][SA] licence and was created from source data at http://www-nds.iaea.org/sgnucdat/c1.htm]

The nuclides produced by fission are usually of unequal mass, as shown in the graph above. Note that the *x*-axis of the graph is atomic mass, not atomic number. Many fission fragments are highly unstable, and decay by giving off beta radiation: this involves a neutron changing into a proton within the nucleus, leaving the overall number of nucleons (and hence the mass of the nucleus) the same.

Introduction to Nuclear Power Generation

There are two main types of nuclear reactor, characterized by the speed of the neutrons which induce fission:

1. *Thermal reactors*. These are the predominant kind, using slower neutrons to induce fission, the basic fissile nuclide being U-235.
2. *Fast breeder reactors*. In these less-common reactors, the fast neutrons are used directly to create (breed) fissile nuclides from fissionable nuclides; most commonly Pu-239 is bred from U-238. Pu-239 is also used in nuclear weapons.

There are many varieties of nuclear reactor, but all have the following common elements:

**Fuel:** The material that undergoes fission. This needn't have the fissionable nuclides in the form of the element; the fuel is often in the form of a ceramic.

**Cladding:** This encases the nuclear fuel, isolating it mechanically and chemically from its immediate environment.

**Moderator:** Necessary in thermal reactors to slow down the neutrons produced by the fission process. Commonly, the moderator is in the form of a rod, but it can be in liquid form or even be mixed with the fuel itself.

**Control:** This can be used to absorb excess neutrons, or even shut down the reactor in an emergency. Most often, the control material is in the form of a rod.

**Core:** The heart of the reactor, containing the fuel. The fuel is encased in cladding, and the core must also accommodate the coolant and allow for more moderating rods or control rods to be added.

**Coolant:** The coolant removes heat from the reactor core into a heat exchanger. Note that the coolant itself is not cool, just that it removes heat from the core.

**Reactor vessel:** This contains the reactor core and the coolant. It often also acts as a reflector, reducing the loss of neutrons to the outside environment.

**Generator/turbine:** The heat generated by the reactor core produces steam, used to drive a turbine, which can generate electricity.

The following simulation demonstrates these main components in use.

The types of reactor are loosely grouped into generations describing the time period in which they were first used. Advances in technology have led to new designs. The current generation of reactors can be defined by the materials used for each of these components. They include Pressurised Water Reactors (PWR), the most common reactor type, Boiling Water Reactors (BWR), and CANDU or Pressurised Heavy Water Reactors (PHWR). These all include water as a coolant in some form. There are also Gas Cooled Reactors (GCR) and Advanced Gas Cooled Reactors (AGR), which use CO2 as coolant.
Finally, there are also Liquid Metal Fast Breeder Reactors (LMFBR), which are cooled by a liquid metal (sodium or lead). There are also many other forms of reactor used for research purposes.

The next generation, commonly referred to as Generation IV, are in some cases just incremental improvements on these designs, but in other cases are radically different designs aimed at increasing efficiencies and reducing risk. The latter may demand materials which can sustain exposure to much more extreme environments.

Cross-Sections

To understand the rest of this TLP, it is vital to know about cross-sections.

What is a Cross-Section?

A *cross-section* quantifies the probability that a particle passing through a material will interact with the material. For example, a neutron absorption cross-section quantifies the probability that a neutron is absorbed as it travels through a material. The following equation is a definition of the nuclear cross-section σ:

\[\sigma = \frac{C}{{N\,\delta x\,I}}\]

For neutrons passing through a plate of thickness δ*x* (m), *C* is the number of events occurring per unit area (m−2), *N* is the number of nuclei per unit volume, or nuclear number density (m−3), and *I* is the number of neutrons passing through a unit area (m−2). As the behaviour depends on neutron energy, the cross-section must be specified for neutrons of a given energy (i.e. *monoenergetic*). The Nδx term is often grouped together, since when multiplied by σ it is equal to *C* / *I*, a dimensionless quantity that is the probability of a neutron interacting, i.e. the ratio of the number of events occurring per unit area to the number of neutrons travelling through that same area.

Types of Cross-Section

Several different cross-sections will be mentioned in this TLP. Standard notation is used below, where (a,b) means an atomic interaction in which a is absorbed and b is emitted.

Elastic scattering (n,n): the cross-section for a neutron undergoing elastic scattering by a nucleus. The total kinetic energy of the neutron and the nucleus is conserved. Any energy that the neutron loses is due to the nucleus recoiling after the neutron is scattered.

Inelastic scattering (n,n'): a neutron is briefly absorbed by a nucleus, leaving it in an excited state. The nucleus can later return to its ground state, losing its excess energy as a gamma ray.

Radiative capture (n,γ): a neutron is absorbed by a nucleus, which gives out a gamma ray as a result.

Fission (n,f): a neutron causes a nucleus to split into fragments and more neutrons.

Alpha decay (n,α): a neutron causes a nucleus to lose two protons and two neutrons in the form of a helium nucleus. This interaction is important when considering the transmutation of elements, and how radioactivity is induced in a material.

Virtually any possible interaction has its own specific cross-section; the ones above are just some of the most common. Other important interactions include (n,p) and (n,2n).

Cross-Section and Neutron Energy

![Graph showing neutron cross section against neutron energy](images/cross-section-energy-graph.png)

Graph showing neutron cross-section against neutron energy. [Adapted from graph by CC[BY][SA], source data unknown]

As the log-log graph above shows, cross-sections vary with neutron energy. Since most neutrons are in the thermal range (about 0.025 eV, or about 4 × 10−21 J), cross-sections are often quoted for this neutron energy. Even though cross-sections do vary with energy, nuclides still have characteristically "high" or "low" cross-sections. For example, as the graph shows, 235U (n,γ) has a higher cross-section than 233U (n,γ) over almost all energy ranges. The peaks in the graph are due to resonance effects; the reasons for these are beyond the scope of this TLP.

The Macroscopic Cross-Section -

So far we have examined the microscopic cross-section. When talking about actual materials, the macroscopic cross-section is more commonly used. Each element present in a material has its own macroscopic cross-section (m−1) defined by the following equation, where *N* is the nuclear number density as used earlier (m−3):

\[\Sigma\_{i} = N\_{i}\sigma\_{i}\]

And for the material as a whole, its macroscopic cross-section is therefore:

\[\Sigma = N\_{1}\sigma\_{1} + N\_{2}\sigma\_{2} + \cdot \cdot \cdot + N\_{i}\sigma\_{i} + \cdot \cdot \cdot\]

The macroscopic cross-section is the probability that a neutron will undergo a reaction per unit path length travelled in the material. The probability that a neutron travels a distance *x* without interacting is therefore:

$$\exp(-\Sigma x)$$

And the neutron mean free path, i.e. the average distance a neutron travels before interacting, can be found by integrating over this quantity as follows:

$$\lambda = \int\_0^\infty x {\rm{P}}(x){\rm{d}}x = \int\_0^\infty x \Sigma \exp ( - \Sigma x){\rm{d}}x = {1 \over \Sigma }$$
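As a numerical illustration (my own sketch, using typical thermal-neutron literature values rather than numbers from this TLP), the following computes N, Σ and λ for graphite:

```python
N_A = 6.022e23          # Avogadro's number (per mol)

# Graphite, with typical literature values (assumed, not from this TLP):
rho = 1600.0            # density (kg m^-3), reactor-grade graphite
M = 0.012               # molar mass of carbon (kg mol^-1)
sigma_s = 4.7e-28       # thermal scattering cross-section (m^2), ~4.7 barns

N = rho / M * N_A       # nuclear number density (m^-3)
Sigma = N * sigma_s     # macroscopic cross-section (m^-1)
mfp = 1 / Sigma         # mean free path (m)

print(f"N      = {N:.2e} m^-3")       # ~8.0e28 m^-3
print(f"Sigma  = {Sigma:.1f} m^-1")   # ~38 m^-1
print(f"lambda = {mfp * 100:.1f} cm") # scattering mean free path of ~2.7 cm
```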
Interactive Graph of Macroscopic Cross-Section

Try out the graph below to see what effect mass, density and microscopic cross-section have on the macroscopic cross-section. The nuclear number density is calculated by simply working out the number of nuclei present in the material given the molar mass and its density. This method makes the approximation that all the mass is present as nuclei, which is true to a reasonable degree of accuracy (electrons also have mass, but each is only about 1/2000 the mass of a single nucleon and so they do not contribute significantly). The graph is editable: double-click on a cell to edit the numbers given. The arrows along the *x*-axis show the mean free path of the neutron through the material.

Mechanisms of Radiation Damage 1

Most of the radiation damage in a reactor is from the neutron flux being produced in the core. Other forms of radiation, such as gamma radiation, are very weakly interacting and don't produce much effect. The principles in this section can in theory apply to any material, but the key materials are steels (e.g. a cold-worked 316 stainless steel).

Transmutation – (n, α) – Production of Helium -

As seen in the previous section, there are several ways in which neutrons can interact with nuclei, including absorption of the neutron by the nucleus, making the nucleus unstable so that it decays, releasing an alpha particle in the process. Alpha particles consist of two protons and two neutrons, i.e. a 4He nucleus. Since they carry a 2+ charge, they are very highly ionizing, and they quickly pick up electrons from the surrounding lattice and become elemental helium.

In stainless steels, the (n, α) interaction does not occur often with iron itself, but mostly as a result of the nickel content of the alloy, as the graph of its cross-section below shows.

![Effect of the nickel content of iron alloy on its (n,alpha) interaction](images/nickel-graph.jpg)

The presence of helium in the metal causes embrittlement and can act as a nucleation point for voids, which can lead to swelling. Additionally, the neutron flux can induce further radioactivity. This occurs when a neutron transmutes an element into a radioactive one.
This is undesirable, because it creates more low-level radioactive waste to contain when the reactor is eventually decommissioned.

Frenkel Defects -

There are many proposed mechanisms of radiation damage, but on a fundamental level a single neutron scattering event can be considered. If a neutron of sufficient energy scatters off a nucleus, the nucleus itself is displaced. The atom associated with the nucleus finds itself embedded into the structure elsewhere, in a high-energy *interstitial* site. It is termed a *self-interstitial* as the matrix and interstitial atoms are in principle the same. The site the atom previously occupied is now empty: it is a *vacancy*. In this way, self-interstitial-vacancy pairs are formed, and these are called Frenkel defects.

Threshold Energy

At lower energies, the neutron collision causes the nucleus to vibrate, but the nucleus is not displaced. The excess energy is dissipated through the lattice as heat. The threshold energy to form a Frenkel defect depends on the nuclei present and the structure of the material (e.g. the phase of iron). It is typically in the range 10–50 eV (2–8 × 10−18 J). Note that when the neutron scatters off a nucleus, not all of its energy is transferred. This means that the minimum kinetic energy of the neutron must be larger than this threshold value, typically by a factor of 2–3.

This threshold energy is commonly given the symbol Ed. It is the energy required to overcome the potential barrier to move an atom from one lattice site to another. It is approximately twice Es, the energy of sublimation, since roughly twice as many bonds are broken in moving an atom within a lattice as in removing it from the surface, plus a contribution of 4–5 Ec, where Ec is the energy lost by electron stopping (required to allow the lattice to relax after the atom has been displaced).

Displacement Spikes -

Neutron scattering events are not isolated. On average, each displaced atom might then go on to displace further atoms, and likewise the neutron that caused the first displacement might go on to displace further atoms. This means that there is a local cascade of displacements, known as a *displacement spike*, within which there is a large amount of disorder in the structure. This is illustrated with a simulation, below:

The Kinchin and Pease Model -

A neutron scattering from an atom imparts an energy *E*p to it. This *primary knock-on atom* (PKA) with energy *E*p then displaces other atoms, ultimately giving a displacement cascade if *E*p is high enough. The number of atoms displaced by the PKA is difficult to calculate, but a simple model (attributed to Kinchin and Pease) can capture much of the basic physics. The assumptions are:

* the cascade is a sequence of two-body elastic hard-sphere collisions;
* a minimum energy transfer *E*d is required for displacement;
* the maximum energy available for transfer is the cut-off energy *E*c, set by loss to the electrons (electron stopping);
* the atoms are randomly distributed, so that channelling and other effects of crystal structure are ignored.

A full derivation can be found in *Fundamentals of Radiation Materials Science* by Gary S. Was. The average number of atoms displaced by a PKA of energy *E*p is:

\[\nu \left( {E\_p} \right) = \begin{cases} 0 & E\_p < E\_d \\ 1 & E\_d < E\_p < 2E\_d \\ E\_p/2E\_d & 2E\_d < E\_p < E\_c \\ E\_c/2E\_d & E\_p \ge E\_c \end{cases}\]
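The Kinchin–Pease estimate is easy to code up. The sketch below (with illustrative values of Ed and Ec assumed, not taken from this TLP) implements the four cases above:

```python
def kinchin_pease(E_p, E_d=40.0, E_c=30e3):
    """Average number of displaced atoms for a PKA of energy E_p (eV).

    E_d: displacement threshold (eV); E_c: electron-stopping cut-off (eV).
    Both default values here are illustrative assumptions.
    """
    if E_p < E_d:
        return 0.0          # nucleus just vibrates, energy lost as heat
    if E_p < 2 * E_d:
        return 1.0          # only the PKA itself is displaced
    if E_p < E_c:
        return E_p / (2 * E_d)   # cascade regime
    return E_c / (2 * E_d)       # electron stopping caps the cascade

for E_p in (10.0, 60.0, 1e3, 1e4, 1e5):
    print(f"PKA of {E_p:>8.0f} eV -> ~{kinchin_pease(E_p):.0f} displaced atoms")
```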
Mechanisms of Radiation Damage 2

Formation of Dislocation Loops

Both the interstitial atoms and vacancies can diffuse through the lattice, but the interstitial atoms are more mobile. Both interstitials and vacancies are eventually removed from the lattice (when they reach sinks such as dislocations or grain boundaries). However, they are also always being generated by the neutron radiation. Thus steady-state populations of interstitials and vacancies are formed.

There is a tendency for interstitial atoms and vacancies respectively to aggregate together into discs. This is again illustrated through an animation, below:

When there is a sufficient supersaturation of vacancies, the disc of vacancies grows and the gap between the planes on either side collapses to form a continuous lattice with a dislocation loop. Since the Burgers vector is normal to the plane of the loop, it is an edge dislocation that grows or shrinks by climb and moves by glide along a prism; it is termed a *prismatic* loop.

Nucleation and Growth of Voids

Vacancy dislocation loops should reduce the volume of the material whilst interstitial dislocation loops should increase it, as seen in the animation above. In general, we would expect compensating vacancy and interstitial effects to leave the material with approximately the same volume. However, irradiated materials are in fact observed to swell. To explain this, we consider what happens when vacancy loops join together. In practice, when the loops join they form three-dimensional cavities a few nm in diameter. These *voids* contribute no net change in volume to the material, and so this just leaves the interstitial loops, which do lead to swelling in the material.

In the absence of any driving force, it would seem unlikely that enough voids would form for any appreciable effect to be observed in the material. This is where the transmutation of nickel becomes important, since the helium atoms produced are very small and are thus extremely mobile as interstitial atoms in the lattice. They quickly form bubbles, and these helium bubbles can act as nucleation points for void formation.

Effects of Radiation Damage =

The previously discussed changes in microstructure due to radiation damage affect the macroscopic, mechanical properties of the material. These effects happen for a variety of reasons, but are generally less noticeable at higher temperatures, as the damage caused by radiation is constantly being annealed out: at higher temperatures vacancy and interstitial mobility are increased, so they are removed from the lattice faster. The following table gives an overview of the effects observed.

| **Material Property** | **Effect of Radiation Damage** |
| - | - |
| Yield strength | Increases on irradiation, along with a decrease in plastic flow range. |
| Ultimate tensile strength | Also increases on irradiation, but less than the yield strength. |
| Ductile-brittle transition temperature | This marks the transition between a material exhibiting ductile behaviour at higher temperatures and brittle behaviour at lower temperatures. It increases significantly on irradiation, which can present a problem when the reactor vessel cools on shut-down while internal pressure within the reactor is still high; fracture can occur if this is not taken into account. |
| Young's modulus | Small increase on irradiation. |
| Hardness | Increase. |
| High-temperature creep rate | Increase during irradiation. |
| Ductility | Decrease. |
| Stress-rupture strength | Decrease. |
| Density | Decrease, as the material swells on irradiation. |
| Impact strength | Decrease. |
| Thermal conductivity | Decrease on irradiation, since lattice disorder increases, thus increasing phonon scattering. |
| Electrical conductivity | Decrease, for similar reasons to thermal conductivity. |

The following sketch shows the stress-strain curve for a typical stainless steel, and its different form after irradiation.

A stress-strain curve for a stainless steel both before and after irradiation.

Fuel and Cladding =

### Choice of Fuel

There are several important factors when choosing a nuclear fuel:

* The fuel itself must be easily fissionable, preferably fissile.
* The fuel must release sufficient quantities of neutrons per neutron captured to be able to sustain a fission chain reaction. If too many neutrons are produced, a runaway, supercritical reaction would occur, which would be disastrous in the case of a nuclear reactor. The ratio of neutrons produced to neutrons absorbed can, however, be adjusted through the use of control rods and moderators.
* The fuel must have a sufficiently **long half-life**. Fissile materials, by their very nature, due to their instability, are radioactive. Radioactive materials decay exponentially, and this decay is quantified by their *half-life*, the time it takes for half of the radioactive nuclei present to decay into a more stable form. Nuclear fuels must therefore have a sufficiently long half-life, otherwise the nuclei would decay into a useless form before fission could be induced in a controlled manner.
* **Economic factors** are also important. The fuels must be abundant and readily available. Uranium is the only naturally abundant fissile material and exists in an ore called *uraninite* (also known as *pitchblende*), which is primarily uranium (IV) oxide, mined mainly in Canada, Australia and Kazakhstan. It has an isotopic composition of 99.3% of the fissionable but not fissile 238U and just 0.7% of the fissile 235U. This means that it must first be enriched, a difficult and expensive process which raises the proportion of 235U to 238U.
* No plutonium occurs naturally, except in trace amounts as a result of the natural decay of uranium. It is instead made as a by-product in nuclear reactors, and must first be extracted from used nuclear fuels before it can be used.
* **Political concerns** are important; heavily enriched uranium and plutonium can be used for atomic weaponry and so are not favoured. This is why there is current interest in the thorium cycle, which produces 233U. Though this can in theory be used in atomic weaponry, it is always contaminated with 232U, which is highly dangerous because of the amount of gamma radiation it emits, making it very difficult to handle. Before the 233U could be used as a weapon, the 232U would have to be removed, which is again very difficult. This inherent proliferation resistance, and thorium's natural abundance (3–4 times that of uranium), has increased interest in it in recent years.
* The common fission fragments formed are also important, both in the short term, due to the effects they have on structural materials in the reactor, and in the long term, since some fragments have very long half-lives and so will present problems as nuclear waste, needing to be stored for much longer periods of time. It should be noted, however, that the longer the half-life of the fission product, the *less* dangerous it is to people.
This is a somewhat counter-intuitive point that is often missed: a longer half-life means that less of the material decays, and hence gives off dangerous radiation, in a given period. **Form of Fuel** Metallic uranium is not favoured as a fuel since it is dimensionally unstable under irradiation, flammable, can readily corrode in oxygen-containing atmospheres, and can produce uranium dust, which has a low flash-point and can cause serious health problems if inhaled. There is some interest in using metallic uranium alloys as fuel when a particularly high density of fissile or fissionable nuclides is required. As an alternative, ceramic forms can be used, including UO2, U3O8, UC, U2C3, UN, U3Si and USi. The most common of these is uranium dioxide, which has the calcium fluorite (CaF2) structure shown in the image below. ![calcium fluorite structure of UO2](images/UO2.png) ### Choice of Cladding The nuclear fuel cannot be allowed to make direct contact with the coolant inside the reactor vessel, due to the potential for radioactivity to be released into the environment. Instead, cladding has to be used to surround the fuel. Key design criteria are that the cladding should: * be transparent to neutrons, so that it doesn't absorb neutrons that could be used to induce further fission. * have a high thermal conductivity and a low thermal expansion coefficient. Key problems include: * hydrogen embrittlement due to (n, p) reactions inside the cladding. * swelling due to the release of fission product gases. Common choices of cladding material are stainless steel (in FBRs), Zircaloy (in PWRs) and, in the past, Magnox. Moderators A moderator is designed to slow down fast neutrons such that they are more easily absorbed by fissile nuclei. There are two main factors in choosing a moderator: 1. The moderator must not absorb neutrons itself. This means it should have a relatively low neutron absorption cross-section. 2. The moderator should efficiently slow down the neutrons. Modelling neutron-nucleus collisions as classical elastic collisions, in much the same way as gas molecules are modelled, gives the result that the closer the nucleus mass is to that of the neutron, the more energy is transferred in the collision. This means that lighter elements are favoured. The following equation gives the average logarithmic energy loss per collision, ξ, for a neutron colliding with a nuclide of mass *A*. *E*0 is the initial energy of the neutron, and *E*s is the energy after scattering has occurred. $$\xi = \left\langle \ln\left(\frac{E_0}{E_s}\right)\right\rangle = 1 - \frac{(A-1)^2}{2A}\ln\left(\frac{A+1}{A-1}\right)$$ It is beyond the scope of this TLP to derive this equation, but the basic physics is straightforward. In elastic collisions kinetic energy and momentum are conserved, and the energy lost by the neutron can be calculated for any given angle of contact. In three dimensions it is necessary to integrate over all possible angles to obtain an average. The equation is well approximated by: $$\xi \approx \frac{6}{3A+2}$$ This is good enough for most purposes. Since this is a classical derivation applied to a quantum situation, there is probably more error due to the original assumptions than due to this mathematical approximation. Try out the interactive movie below to see this effect in action. The movie obeys the same physics used to derive the above equations, except in a two-dimensional rather than a three-dimensional case.
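As a quick numerical check (this sketch is not part of the original TLP), the script below evaluates the exact expression for ξ alongside the 6/(3A + 2) approximation for a few nuclides; the mass numbers used are standard values.

```python
# Average logarithmic energy loss per collision (xi) for a neutron
# scattering off a nuclide of mass number A: exact expression versus
# the 6/(3A + 2) approximation quoted above.
import math

def xi_exact(A):
    """xi = 1 - (A-1)^2/(2A) * ln((A+1)/(A-1)); A = 1 gives the limit 1."""
    if A == 1:
        return 1.0                      # limiting value for hydrogen
    return 1 - (A - 1) ** 2 / (2 * A) * math.log((A + 1) / (A - 1))

def xi_approx(A):
    return 6 / (3 * A + 2)

for name, A in [("H-1", 1), ("D-2", 2), ("Be-9", 9), ("C-12", 12), ("U-238", 238)]:
    print(f"{name:6s} xi_exact = {xi_exact(A):.3f}   xi_approx = {xi_approx(A):.3f}")
```

The approximation is weakest for the very lightest nuclide (about 20% out for hydrogen) but is within a few per cent for A ≥ 2, and both forms show clearly why light elements are favoured as moderators.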
The simulation is meant to show the energy lost per collision, and does not give an accurate impression of how often these collisions occur: interatomic distances have been greatly reduced for illustrative purposes. In practice it is the scattering cross-section which determines the *rate* of neutron collisions. Finally, the above analysis can be modified to take the neutron cross-sections into account by considering the ratio ξ(Σs / Σa), which weights ξ with the scattering and absorption cross-sections. The higher this ratio, the more appropriate the material is as a moderator.

Graphite

Historically, graphite has been a very popular neutron moderator, and is used in the majority of British reactors. However, the graphite used has to be highly pure to be effective. Graphite can be manufactured artificially using boron electrodes, and even a small amount of contamination from these electrodes can make the graphite unsuitable as a moderator, since boron is a highly effective neutron absorber, and so it "poisons" the graphite by increasing the overall absorption cross-section, Σa. Graphite also has unique problems: it stores energy in metastable local defects when it is irradiated, particularly at lower temperatures. This so-called Wigner energy can be released suddenly when the graphite spontaneously returns to its stable phase, and this sudden rise in temperature is not desirable since it can cause further structural damage within the reactor. This means that graphite has to be annealed to remove the excess energy in its lattice in a controlled manner. The following movie shows three-dimensional models of the graphite lattice and demonstrates the origins of this metastable phase within the graphite lattice. Other common choices are described below.

### Light Water

Hydrogen is a good candidate for a neutron moderator because its mass is almost identical to that of the incident neutron, and so a single collision will reduce the speed of the neutron substantially. However, hydrogen also has a relatively high neutron absorption cross-section, due to its tendency to form deuterium, and so light water is only suitable for enriched fuels, which allow for a higher proportion of fast neutrons.

### Heavy Water

Heavy water has similar benefits to light water, but because its water molecules already contain deuterium atoms it has a low absorption cross-section. Additionally, because of the high energy of the fast neutrons, an additional neutron might be knocked out of a deuterium atom when a collision occurs, thus increasing the number of neutrons present. The main disadvantage of heavy water as a moderator is its high price.

### Beryllium

Beryllium-9 is favoured because, in addition to being a light element, on collision with a fast neutron it can react as follows: 9Be + n → 8Be + 2n The main problems with beryllium are its brittleness as a metallic phase and its toxicity, which make it less favoured as a moderator than the other materials mentioned here.

### Lithium Fluoride

Lithium fluoride is commonly used in molten salt reactors. It is mixed with the molten metal and the fuel, and so its structural properties as a solid are not important.

Summary =

In this TLP, the process of nuclear fission has been described, thus explaining the common choices for nuclear fuel used commercially. Materials selection for the major components of a nuclear reactor has also been explored, including: * Moderators, and how they work best when they consist of light nuclides with relatively low absorption cross-sections.
* Control rods, which require high absorption cross-sections, and how the same nuclides found in control rods, e.g. boron, can act as *poisons*, significantly reducing the efficiency of a reactor if found elsewhere, such as in moderators. * Cladding, which experiences much stronger radiation fluxes and extremes of temperature than any other structural material in the reactor, and so must be able to withstand these conditions. Concepts such as *neutron cross-section* and *neutron flux* have been explained, and this allowed the mechanisms of radiation damage inside structural steels, and the consequences of this damage, to be discussed. Radiation materials science is a mature field, but there are many challenges for materials to permit more efficient operation, improve safety and reliability, and reduce costs. As this TLP has shown, the basic mechanisms of damage caused by low levels of radiation are now well understood, but the much higher levels of radiation, such as those that will be experienced in the new experimental fusion reactor, ITER, are not yet satisfactorily understood. This TLP has given only an introduction to some of the important phenomena. To learn more, consult the Going further section. Test your understanding of this TLP by answering some of the questions in the next section.

Questions =

The cross-section data needed to answer these questions have been supplied here. To find further cross-sections, consult the Evaluated Nuclear Data File (ENDF).

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. Check which elements are fissionable but not fissile: (a) U-233, (b) U-235, (c) U-238, (d) Pu-239, (e) Th-232.
2. Which of the following are **NOT** suitable moderating materials? (a) Deuterium (A = 2), (b) Helium (A = 4), (c) Beryllium-9 (A = 9), (d) Boron (A = 11), (e) Graphite (A = 12), (f) Iron (A = 56).
3. Which of the following would **NOT** be classified as "absorption" cross-sections? (a) (n, n), (b) (n, n'), (c) (n, γ), (d) (n, f), (e) (n, α), (f) (n, p).
4. Which of the following discourages void formation? (a) More interstitial atoms, (b) Fewer interstitial atoms, (c) More vacancies, (d) Fewer vacancies, (e) More transmutation, (f) Less transmutation.
5. Which of the following material properties have lower values after irradiation? (a) Yield strength, (b) Thermal conductivity, (c) Electrical conductivity, (d) Tensile strength, (e) Ductility, (f) Density, (g) Creep rate.

### Deeper questions

*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

6. Zirconium minerals are often found with small amounts of hafnium present due to their chemically similar nature. Zirconium is also used as a primary component of Zircaloy, a cladding material designed to be almost transparent to neutrons. By comparing how the mean free path of a thermal neutron in pure zirconium differs from that of zirconium with 0.01% hafnium impurities, comment on the consequences of hafnium impurities in Zircaloy. (Zr: A = 91.22, ρ = 6.52 g cm−3, σc = 0.18 barns; Hf: A = 178.49, ρ = 13.31 g cm−3, σc = 105 barns.) A rough numerical sketch of this estimate is given below, after the Going further section.

Going further =

### Books
* Was, G. S., *Fundamentals of Radiation Materials Science*, Springer, 2007.
* Ma, B. M., *Nuclear Reactor Materials and Applications*, Van Nostrand, 1983.
* Glasstone, S. and Sesonske, A., *Nuclear Reactor Engineering, Third Edition*, Van Nostrand, 1981.

### Websites
* Evaluated Nuclear Data File (ENDF)
* Nuclear Data Services

### Papers and other publications
* MRS Bulletin, various articles feature topics relating to nuclear power, including Volume 34, January 2009.
* On void formation: L.K. Mansur, *Theory and experimental background on dimensional changes in irradiated alloys*, Journal of Nuclear Materials, Volume 216, October 1994, Pages 97-123, DOI: 10.1016/0022-3115(94)90009-4.
* On Wigner energy: R.H. Telling, et al., *Wigner defects bridge the graphite gap*, Nature Materials, Volume 2, April 2003, Pages 333-337, DOI: 10.1038/nmat876.
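Returning to Question 6 above: the following short sketch (not part of the original TLP) shows the kind of estimate intended, using the mean free path λ = 1/(nσ). For simplicity it treats the 0.01% hafnium as an atomic fraction substituted into the zirconium lattice, leaving the total atom density unchanged — a labelled simplifying assumption, not part of the question.

```python
# Rough estimate for Question 6: thermal-neutron mean free path
# lambda = 1 / (n * sigma). The 0.01% Hf is treated as an atomic
# fraction substituted into the Zr lattice (simplifying assumption).
N_A = 6.022e23                                    # Avogadro constant, mol^-1

rho_zr, A_zr, sigma_zr = 6.52, 91.22, 0.18e-24    # g cm^-3, g mol^-1, cm^2
sigma_hf = 105e-24                                # Hf capture cross-section, cm^2

n = rho_zr * N_A / A_zr                           # atoms per cm^3
lam_pure = 1 / (n * sigma_zr)                     # mean free path, pure Zr

x = 1e-4                                          # Hf atomic fraction (0.01%)
sigma_mix = (1 - x) * sigma_zr + x * sigma_hf     # averaged cross-section
lam_mix = 1 / (n * sigma_mix)

print(f"pure Zr        : lambda = {lam_pure:.0f} cm")
print(f"Zr + 0.01% Hf  : lambda = {lam_mix:.0f} cm")
```

Even at this tiny impurity level the mean free path drops by several per cent, because the capture cross-section of hafnium is nearly 600 times that of zirconium — hence the need to separate hafnium from zirconium when making cladding.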
Aims On completion of this teaching and learning package you should: * Appreciate the differences between reflected-light and transmitted-light microscopes. * Understand the use of polarised light in a transmission microscope. * Be able to set up and use a microscope to study a range of specimens. * Understand the steps required to prepare metallographic, ceramic and polymer specimens. Introduction Optical microscopes have a wide variety of applications; they are very powerful tools for inspecting the microstructure of a great range of materials. It is important to use the appropriate mode for the specimen, choosing from reflected-light or transmitted-light modes. *Reflected-light microscopy* is used for a range of materials, including metals, ceramics and composites. Contrast between different regions when viewed in *reflected light* can arise from variations in surface topography and differences in reflectivity (e.g. of different phases, different grain orientations, or boundary regions). These features are revealed by a series of specimen preparation techniques which, when carried out with care, can produce useful, high quality images. *Transmission mode* can be used when the specimen is transparent. The specimen is usually in the form of a thin slice (e.g. tens of microns thick). Contrast arises from differences in the absorption of light through different regions. This method is used for the examination of minerals and rocks, as well as glasses, ceramics and polymers. In addition, the transmission mode can often be further enhanced with the use of polarised light. *Polarised light microscopy* is a specialised use of the transmission mode, and contrast is due to differences in birefringence and thickness of the specimen. This can allow the observation of grains, grain orientation and thickness. Sample Preparation For Metals When preparing samples for microscopy, it is important to produce something that is representative of the whole specimen. It is not always possible to achieve this with a single sample. Indeed, it is always good practice to mount samples from a material under study in more than one orientation. The variation in material properties will affect how the preparation should be handled; for example, very soft or ductile materials may be difficult to polish mechanically. ### Cutting a specimen It is important to be alert to the fact that preparation of a specimen may change the microstructure of the material, for example through heating, chemical attack, or mechanical damage. The amount of damage depends on the method by which the specimen is cut and the material itself. Cutting with abrasives may cause a large amount of damage, whilst the use of a low-speed diamond saw can cause fewer problems. There are many different cutting methods, although some are used only for specific specimen types. ### Mounting Mounting of specimens is usually necessary to allow them to be handled easily. It also minimises the amount of damage likely to be caused to the specimen itself. The mounting material used should not influence the specimen as a result of chemical reaction or mechanical stresses. It should adhere well to the specimen and, if the specimen is to be electropolished (an electrolytic process) or examined under a scanning electron microscope, then the mounting material should also be electrically conducting.
Specimens can be hot mounted (at around 200 °C) using a mounting press, either in a thermosetting plastic (*e.g.* phenolic resin) or a thermosoftening plastic (*e.g.* acrylic resin). If hot mounting would alter the structure of the specimen, a cold-setting resin can be used, *e.g.* epoxy, acrylic or polyester resin. Porous materials must be impregnated by resin before mounting or polishing, to prevent grit, polishing media or etchant being trapped in the pores, and to preserve the open structure of the material. A mounted specimen usually has a thickness of about half its diameter, to prevent rocking during grinding and polishing. The edges of the mounted specimen should be rounded to minimise the damage to grinding and polishing discs. ![A diagram of a mounted specimen](images/specimen.gif) A diagram of a mounted specimen ### Grinding Surface layers damaged by cutting must be removed by grinding. Mounted specimens are ground with rotating discs of abrasive paper flushed with a suitable coolant to remove debris and heat, for example wet silicon carbide paper. The coarseness of the paper is indicated by a number: the number of grains of silicon carbide per square inch. So, for example, 180 grit paper is coarser than 1200 grit. The grinding procedure involves several stages, using a finer paper (higher number) for each successive stage. Each grinding stage removes the scratches from the previous, coarser paper. This is more easily achieved by orienting the specimen perpendicular to the previous scratches, and watching for these previously oriented scratches to be obliterated. Between each grade the specimen is washed thoroughly with soapy water to prevent contamination from coarser grit present on the specimen surface. Typically, the finest grade of paper used is 1200 grit, and once the only scratches left on the specimen are from this grade, the specimen is thoroughly washed with water, followed by alcohol, and then allowed to dry. It is possible to determine the starting point for grinding using the following empirical relationship, where the width of the largest scratch is measured under a microscope: ![](images/equ1.gif) This prevents putting more damage into the sample than already exists; the coarsest grades of paper are often not useful. Cleaning specimens in an ultrasonic bath can also be helpful, but is not essential. The series of photos shows the progression of the specimen when ground with progressively finer paper. (Photographs: the copper specimen ground with 180, 400, 800 and 1200 grit paper.) ### Polishing Polishing discs are covered with a soft cloth impregnated with abrasive diamond particles and an oily lubricant. Particles of two different grades are used: a coarser polish, typically with diamond particles 6 microns in diameter, which should remove the scratches produced by the finest grinding stage, and a finer polish, typically with diamond particles 1 micron in diameter, to produce a smooth surface. Before using a finer polishing wheel the specimen should be washed thoroughly with warm soapy water followed by alcohol to prevent contamination of the disc. (Photographs: the copper specimen polished to the 6 micron and then the 1 micron level. Ideally there should be no scratches after polishing, but it is often hard to remove them all completely.) Mechanical polishing will always leave a layer of disturbed material on the surface of the specimen. If the specimen is particularly susceptible to mechanical damage (or excessive force is used in the grinding and polishing stages), debris can become embedded in the surface and plastic deformation may exist below the surface. Electropolishing or chemical polishing can be used to remove this, leaving an undisturbed surface. ### Etching Etching is used to reveal the microstructure of the metal through selective chemical attack. It also removes the thin, highly deformed layer introduced during grinding and polishing. In alloys with more than one phase, etching creates contrast between different regions through differences in topography or reflectivity. The rate of etching is affected by crystallographic orientation, the phase present and the stability of the region. This means contrast may arise through different mechanisms, therefore revealing different features of the sample. In all samples, etchants will preferentially attack high energy sites, such as boundaries and defects. ![Diagrams showing how contrast can arise in microscope image](images/etched.gif) The specimen is etched using a reagent. For example, for etching stainless steel or copper and its alloys, a saturated aqueous solution of ferric chloride containing a few drops of hydrochloric acid is used. This is applied using a cotton bud wiped over the surface a few times. (Care should be taken not to over-etch; this is difficult to judge, but the photos may be of some help.) The specimen should then immediately be washed in alcohol and dried. Following the etching process there may be numerous small pits present on the surface. These are etch pits caused by localised chemical attack and, in most cases, they do not represent features of the microstructure. They may occur preferentially in regions of high local disorder, for example where there is a high concentration of dislocations. If the specimen is over-etched, i.e. etched for too long, these pits tend to grow and obscure the main features to be observed. If this occurs it may be better to grind away the poorly etched surface and re-polish and etch, although it is important to remember what features you are trying to observe – repeatedly grinding a very thin sample may leave nothing to see. (Photographs: an etched copper specimen and an over-etched copper specimen.) Ideally the surface to be examined optically should be flat and level. If it is not, the image will pass in and out of focus as the viewing area is moved across the surface. In addition, it will make it difficult to have the whole of the field of view in focus – while the centre is focused, the sides will be out of focus. By using a specimen levelling press (shown below) this problem can be avoided, as it presses the mounted specimen into plasticine on a microscope slide, making it level. A small piece of paper or cloth covers the surface of the specimen to avoid scratching.
![Labelled photograph of specimen levelling press](images/sample-press.jpg) Specimen levelling press Ceramics and Polymers = Ceramics ### Thin Sections To prepare ceramic specimens for transmitted light microscopy, a thin slice approximately 5 mm thick is cut using a diamond saw or cutting wheel. One surface is then lapped using liquid suspensions of successively finer silicon carbide powders. Between stages in the process the specimen must be thoroughly cleaned. After final washing and drying, the ground surface is bonded to a microscope slide with resin. A cut-off saw is used on the exposed face to reduce the thickness to about 0.7 mm. The specimen is then lapped to take it to the required thickness – usually about 30 µm, although some ceramic specimens are thinned to as little as 10 µm, due to their finer grain size. The slide is checked for thickness under the microscope, and then hand finished. The slide is then covered with a protective cover slip. ### Lapping The lapping process is an alternative to grinding, in which the abrasive particles are not firmly fixed to paper. Instead, a paste and lubricant are applied to the surface of a disc. Surface roughness from coarser preparation steps is removed by the micro-impact of rolling abrasive particles. ### Polished sections These differ from ordinary thin sections in that the upper surface of the specimen is not covered with a cover slip, but is polished. Care must be taken to prevent the specimen breaking. Sections may be examined using both transmitted and reflected light microscopy, which is particularly useful if some constituents are opaque. Polymers ### Thin sections Thin sections of organic polymers are prepared from solid material by cutting slices using a microtome – a mechanical instrument used for cutting thin sections. They must be cut at a temperature below the glass transition temperature of the polymer. A cut section curls up during cutting and must be unrolled, mounted on a microscope slide and covered with a cover slip. A few drops of mounting adhesive are used, and this must be compatible with the specimen. As always, the mounting temperature must not affect the microstructure of the specimen. The thickness of cut slices of polymer tends to lie in the range 2 to 30 µm, depending on the type of material. Harder polymers can be prepared in the same way as thin ceramic specimens. ### Polished sections These are prepared in the same way as metallographic specimens. Elastomers are more difficult to polish than thermosetting polymers and require longer polishing times. Lubricants used during polishing must not be absorbed by the specimen. As crystalline regions are attacked more slowly than amorphous ones, etching of polymer specimens can produce contrast revealing the polymer structure. Using the Reflection Microscope = Looking down a reflection microscope we see the light reflected off a sample. Remember that contrast can arise in different ways. Below is a diagram of a reflected-light microscope; roll the mouse over the labelled parts to see a description. Using the Transmission Microscope = Transmitted-light microscopes are used to look at thin sections – the specimen must transmit the light. Here is a diagram of a transmission microscope; roll the mouse over the labelled parts to see a description.
Using Microscopes = Both types of microscope are used in very similar ways; here are some guidelines as to how to set up a specimen to be observed: * The specimen is mounted and placed on the stage; begin by slowly increasing the power of the light source until there is a bright spot visible on the sample (without looking down the eyepiece). * With the lowest magnification lens in place, focus using the coarse focus knob: without looking down the microscope, lower the objective lens close to the specimen surface, and then use the coarse focus knob to slowly raise it until the circle of light on the specimen appears reasonably sharp. Now, looking through the eyepiece, adjust the coarse focus control. When looking down the eyepiece and using the coarse focus, you should only ever adjust so as to move the sample away from the objective. * The eyepiece distance (for binocular microscopes) should be adjusted to a comfortable separation and, looking through the eyepieces, the fine focus knob used to bring the image to a sharp focus. * The image should first be focused in the non-adjustable eyepiece, and the other eyepiece then adjusted so that it is also in focus. * To increase the magnification, slide the rotatable nosepiece around (ensuring the lens does not touch the specimen) and then re-focus using the fine focus (it should take very little adjustment!). Once a representative area is found and focused, a digital camera can be used to take a photo, and a sketch can be made. It is important, even in cases where there is access to microscope cameras, to make labelled sketches of important aspects of the field of view. Remember that a sketch does not have to be a copy of what you see, but should include the key aspects of the microstructure. Scale bars Observations under a microscope are of no value if there is no scale accompanying them, so it is very important to understand the scale. All sketches should have scale bars, and microscope camera software often allows a scale bar to be added before saving the image (given the right information about magnification). The easiest way of measuring the size of a feature under a microscope is to relate it to the size of the field of view. The simplest way of achieving this is to measure the size of the field of view at a low magnification, and then scale the size appropriately as the magnification is increased. The field of view can be measured approximately by looking at a ruler under the lowest magnification lens. Accuracy can be improved by using a graticule. A graticule is a slide with a very fine grating which, if metric, will usually measure 1 mm across and is divided into 100 segments, *i.e.* each segment is 10 µm across. This allows much greater accuracy in measuring the field of view, and so greater accuracy in measuring features. ![Metric graticule in polarised light](images/graticule.jpg) Metric graticule in polarised light On some microscopes, a scale bar is superimposed on one of the eyepieces, which can be used to further improve the accuracy of measuring feature sizes. The scale bar can be calibrated by observing either a graticule or a ruler at a low magnification. For example, if 1 division is equivalent to 20 mm with a ×5 magnification lens, then each division is equivalent to 2 mm with a ×50 magnification lens. By measuring a feature using the scale in the eyepiece, the actual size of the feature can be calculated from the width of the divisions in the eyepiece.
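To make this scaling explicit (the sketch below is not part of the original TLP), the snippet converts the worked example above — one division equivalent to 20 mm with the ×5 lens — to other magnifications, and applies it to a hypothetical feature spanning 3.5 divisions.

```python
# Eyepiece-scale calibration sketch: the length represented by one
# eyepiece division scales inversely with objective magnification.
def division_size(ref_size_um, ref_mag, mag):
    """Length of one eyepiece division at a given magnification."""
    return ref_size_um * ref_mag / mag

# Example from the text: 1 division = 20 mm (20000 um) with the x5 lens.
for mag in [5, 10, 50]:
    print(f"x{mag:<3d} 1 division = {division_size(20000, 5, mag):.0f} um")

# A hypothetical feature spanning 3.5 divisions under the x50 lens:
print(f"feature size = {3.5 * division_size(20000, 5, 50):.0f} um")
```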
The scale bar on the eyepiece is particularly useful because it can be rotated, so both widths and lengths can be measured without rotating the specimen. Polarised light = Visible light is a form of electromagnetic radiation, with electric and magnetic vectors oscillating perpendicular to the direction of propagation. Usually the oscillations are in any direction perpendicular to the direction of propagation; polarised light is light with oscillations in a few, restricted, directions. Polarised light has a variety of uses, including in Polaroid sunglasses, which block out light of a certain polarisation, removing much of the glare from the ground. The following flash animation gives an introduction to how polarised light is used in microscopy. As can be seen, two polarising films at 90° to one another transmit no light; inserting an optically anisotropic material between the polarisers can result in the light vector being rotated. When light enters an optically anisotropic material, the light vectors are polarised in two Permitted Vibration Directions (PVDs). The difference in refractive index between these directions results in a retardation of one ray with respect to the other; the rays propagate at different speeds within the material and exit with a phase difference between them. This causes one light frequency (i.e. one wavelength) to show destructive interference, and that wavelength of light is lost. Other wavelengths will constructively interfere (to different extents), so different colours are seen, depending on the retardation. This is called birefringence. A quartz wedge under crossed polars shows how the observed colour changes as the retardation increases. In the photo below, the wedge increases in thickness from left to right. As the thickness increases, the retardation also increases, and the wedge shows a range of birefringent colours along its length. The retardation is the product of the birefringence and the thickness of the specimen. ![A quartz wedge under crossed polars (increasing in thickness from left to right), so the retardation also increases](images/wedge.jpg) The retardation also depends on the orientation of the optical axes of the material relative to the polarised light (so rotating the stage may change the colour). The arrangement of the crossed polars also allows for the insertion of plates at 45° to the planes of polarisation. These are used to enhance the contrast in a specimen. For further effects, it is also often possible to rotate one of the polarisers if crossed polars are not to be used. When observing a specimen, differences in birefringence allow phases and grains to be identified. For example, different grain orientations may exhibit differences in birefringence, and this will cause them to appear a different colour. The series of photos shows the difference in the appearance of some glass ceramic specimens as different plates are inserted. (Photographs: glass ceramic transmission microscope images made with unpolarised light, with polarised light, with polarised light and a quarter-wave plate, and with polarised light and a full-wave plate.) Optically anisotropic materials aligned with one of the permitted vibration directions parallel to the direction of the polarised light vector appear 'in extinction' (i.e. black) between crossed polars. Resolution and Imaging The *limit of resolution* (or *resolving power*) is a measure of the ability of the objective lens to separate in the image adjacent details that are present in the object. It is the distance between two points in the object that are just resolved in the image. The resolving power of an optical system is ultimately limited by diffraction at the aperture. Thus an optical system cannot form a perfect image of a point. For resolution to occur, at least the direct beam and the first-order diffracted beam must be collected by the objective. If the lens aperture is too small, only the direct beam is collected and the resolution is lost. ![Diagram illustrating diffraction](images/diagram4.gif) Consider a grating of spacing d illuminated by light of wavelength λ, at an angle of incidence i. ![Diagram illustrating diffraction](images/diagram5.gif) The path difference between the direct beam and the first-order diffracted beam is exactly one wavelength, *λ*. So, d sin *i* + d sin *α* = λ, where 2α is the angle through which the first-order beam is diffracted. Since the two beams are just collected by the objective, i = α, and thus the limit of resolution is: $$d_{\min} = \frac{\lambda}{2\sin\alpha}$$ The wavelength of light is an important factor in the resolution of a microscope. Shorter wavelengths yield higher resolution. The greatest resolving power in optical microscopy requires near-ultraviolet light, the shortest effective visible imaging wavelength. ### Numerical Aperture The numerical aperture of a microscope objective is a measure of its ability to resolve fine specimen detail. The value for the numerical aperture is given by: Numerical Aperture (NA) = n sin α, where n is the refractive index (equal to 1 for air) and α is the half-angle subtended by rays entering the objective lens. The numerical aperture determines the resolving power of an objective: the higher the numerical aperture of the system, the better the resolution. ![Diagram illustrating relationship between numerical aperture and resolution](images/diagram6.gif)

| Low numerical aperture | High numerical aperture |
| - | - |
| Low value for α | High value for α |
| Low resolution | High resolution |

### Airy Discs When light from the various points of a specimen passes through the objective and an image is created, the various points in the specimen appear as small patterns in the image. These are known as Airy discs. The phenomenon is caused by diffraction of light as it passes through the circular aperture of the objective. Airy discs consist of small, concentric light and dark circles. The smaller the Airy discs projected by an objective in forming the image, the more detail of the specimen is discernible. Objective lenses of higher numerical aperture are capable of producing smaller Airy discs, and therefore can distinguish finer detail in the specimen. The limit at which two Airy discs can be resolved into separate entities is often called the Rayleigh criterion. This is when the first diffraction minimum of the image of one source point coincides with the maximum of another.
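The resolution formulae above are easy to evaluate. The short sketch below (not part of the original TLP) computes d_min = λ/(2 sin α) = λ/(2 NA) for some illustrative wavelengths and numerical apertures.

```python
# Limit of resolution d_min = lambda / (2 sin(alpha)) = lambda / (2 NA),
# from the grating argument above. Wavelength/NA values are illustrative.
def d_min_nm(wavelength_nm, NA):
    return wavelength_nm / (2 * NA)

for wavelength, NA in [(550, 0.25), (550, 0.95), (400, 0.95)]:
    print(f"lambda = {wavelength} nm, NA = {NA:4.2f}: "
          f"d_min = {d_min_nm(wavelength, NA):.0f} nm")
```

As the text notes, both a higher numerical aperture and a shorter wavelength improve the achievable resolution.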
(Diffraction patterns for Airy discs that are unresolvable, at the Rayleigh criterion, and resolvable.) Circular apertures produce diffraction patterns with circular symmetry. Mathematical analysis gives the equation: $$\sin\theta_R = \frac{1.22\lambda}{d}$$ where θR is the angular position of the first-order diffraction minimum (the first dark ring), λ is the wavelength of the incident light, and d is the diameter of the aperture. From the equation it can be seen that the radius of the central maximum is directly proportional to λ/d. So, the maximum is more spread out for longer wavelengths and/or smaller apertures. The primary minimum sets a limit to the useful magnification of the objective lens. A point source of light is always imaged by the lens as a central spot surrounded by second- and higher-order maxima; this could only be avoided with a lens of infinite diameter. Two objects separated by an angle less than θR cannot be resolved.

Summary =

* The optical microscope is a very useful tool for the observation of materials and can be used to gain valuable information about a large variety of specimens. Some knowledge of the material, and of the information that is required, is essential to determine the best techniques to employ when preparing and examining specimens. * Sample preparation is a critical part of microscopy, as this determines the quality of the images produced. Many techniques, when correctly applied to a specimen, can enhance the information present. * One of the limitations of the optical microscope is that of resolution. High resolution imaging is more commonly carried out in a scanning electron microscope (SEM). * In addition, for 'transparent' specimens, in particular those of anisotropic materials, polarised light microscopy can offer large benefits, with high contrast possible.

Questions =

### Deeper questions

*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

1. How should the initial focusing of the microscope be done? (a) With the coarse focus, moving the lens towards the specimen. (b) With the fine focus, moving the lens towards the specimen. (c) With the coarse focus, moving the lens away from the specimen. (d) With the fine focus, moving the lens away from the specimen.
2. Specimen preparation is important in metallurgy because: (a) A poorly prepared specimen can damage the microscope. (b) A poorly prepared specimen will distract from features on the specimen. (c) Only a well prepared specimen will reflect light. (d) A poorly prepared specimen will corrode, and the resulting images will be misleading.
3. When increasing the magnification on the microscope, which of the following occurs? (a) The depth of field increases. (b) The resolution limit decreases. (c) The visible area decreases. (d) The contrast increases.
4. When the aperture stop is made smaller, which of the following occur (answer yes or no for each)? (a) The depth of field increases. (b) The resolution decreases. (c) The contrast increases. (d) The brightness increases.
5. The red tint plate (also known as a full wave sensitive tint plate) increases the contrast in a polarised light microscope because: (a) Our eyes are more sensitive to red light, so it is easier to see the light and dark areas when there is a red tint plate. (b) The red tint plate only lets a small window of wavelengths through, and so increases the birefringence. (c) The red tint plate displaces the ordinary and the extraordinary beams by an extra wavelength, so that small differences in birefringence cause large differences in colour. (d) The red tint plate increases the differences in birefringence in the material so that the different grain directions cause a greater difference in colour than in just the polarised light.
6. Contrast in reflected microscopy tends to be caused by: (a) Variations in topography and differences in reflectivity of areas. (b) Only differences in reflectivity of areas. (c) Only topography. (d) Variations in thickness of the specimen.
7. If a graticule is observed under the ×10 lens of a microscope so that the diameter of the field of view runs from 150 μm to 450 μm on the graticule, what is the width of one lamella when 15 lamellae fill the field of view under the ×50 lens? (A short numerical sketch of this calculation is given below, after the Going further section.)

Going further =

### Books
* R.C. Gifkins, *Optical microscopy of metals*, Pitman, 1970
* R. Haynes, *Optical microscopy of materials*, Kluwer Academic Publishers, 1984
* Eugene Hecht, *Optics*, Addison Wesley, 2001
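Returning to Question 7, a minimal numerical sketch (not part of the original TLP) of the field-of-view scaling described in the Scale bars section:

```python
# Field of view measured with the graticule at x10, then scaled to x50.
fov_10x = 450 - 150             # field of view at x10, in um
fov_50x = fov_10x * 10 / 50     # field of view shrinks as magnification rises
lamella = fov_50x / 15          # 15 lamellae fill the x50 field of view
print(f"lamella width = {lamella:.0f} um")
```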
Aims On completion of this TLP you should: * Understand how materials-selection maps can be used to choose a suitable material for an engineering application. * Understand the use of merit indices in relation to materials-selection maps and be able to derive common examples. * Appreciate that biomaterials have different properties to man-made materials and can outperform common engineering materials for certain applications. * Be aware of why some biomaterials have developed specific properties. Before you start * You should be familiar with Young's modulus *E*, strength *σf*, density *ρ* and toughness (fracture energy *Gc* and fracture toughness *Kc*). * You should understand beam deflections during cantilever bending. Introduction As a result of evolutionary selection, biomaterials are well adapted for their functions, as will be discovered in this TLP. (Note: in this TLP the term *biomaterials* refers to the materials of living systems, and not to man-made materials with biomedical applications.) Unlike man-made materials, a limited range of ingredients (never metal) is used to display a wide range of properties. Charles Darwin himself noted that, 'As a general principle, natural selection is continually trying to economise every part of the organisation.' For instance, tendons and muscle are made of collagen, as are the cornea, skin and blood vessels. Whereas the manufacturing process determines an engineering material's properties, in biomaterials the structure's age and environment affect the material's properties, leading to variation in the properties of a specific biomaterial. Biomaterials tend to be strong, anisotropic and are usually composite materials. Biomaterials have many advantages, as they are sustainable, recyclable and biodegradable, unlike most common engineering materials. However, the hierarchical structure makes replicating the structure of biomaterials complicated, and their use is restricted to ambient-temperature applications. As with man-made materials, biomaterials are classed into the following groups corresponding to shared characteristics: * Natural ceramics and ceramic composites (e.g. enamel, bone, shell, antler). * Natural polymers and polymer composites (e.g. proteins such as silk and polysaccharides such as cellulose). * Natural elastomers (e.g. skin, artery, cartilage). * Natural cellular materials (e.g. wood, cancellous bone, cork). This TLP shows how materials-selection maps can be used to compare biomaterials with common engineering materials, looking specifically at using these maps in conjunction with merit indices. Four of the most commonly used materials-selection maps will be studied: * Young's modulus against density, * Strength against density, * Young's modulus against strength, * Toughness against Young's modulus. These are particularly important for choosing the most suitable material for a specific application, and for looking at the interesting properties specific to certain biomaterials, such as *viscid silk*, found as the capture threads in spiders' webs. Young's Modulus - Density selection map =
Imagine a material is needed to build an aircraft panel. This will be subject to bending moments, and so the deflection that occurs must be minimised. It is also necessary to keep the mass of the aircraft low, and so the density of the material should be minimised too. It is important to find a balance between these different material properties in order to find the most suitable material for the application. This can be achieved by finding the appropriate merit index and comparing its value for different materials. Consider the bending of a flat panel of length L, width w and height h, subject to an end load F. The engineering application sets the size of the panel (L and w) and the load it must support (F). Thus L, w and F are not variables in our analysis. On the other hand, the height h is a variable, in that it is not of direct interest: it may be equally valid to use a thin panel of a dense material, or a thick panel of a light material. The bending moment M at the root of the beam is given by M = FL, and this decreases to zero at the end of the beam. ![](figures/cantilever.png) Cantilever bending The second moment of area I for such a panel is given by: $$I = {{w{h^3}} \over {12}}$$ Assuming only simple bending occurs (and the panel only deflects as shown in the picture, i.e. without forming any complex shapes), the deflection of the end of the panel \(\delta\) is given by: $$\delta = {{F{L^3}} \over {3EI}}$$ The mass m of the beam is given by: $$m = Lwh\rho$$ Hence, taking the height h as the free parameter and combining the previous equations by eliminating h gives: $$\delta = {{4F{L^6}{w^2}} \over {{m^3}}}\left( {{{\rho ^3}} \over E} \right)$$ So to obtain the minimum deflection for a panel of free thickness and given mass, or equivalently the minimum mass for a given deflection, \({{{E^{1/3}}} \over \rho }\) must be maximised. This is an example of a merit index. As the other values in the equation are engineering parameters, the material chosen will not alter them, and so they are excluded from the merit index, which is based only on materials parameters. Low density is clearly very important for this merit index, and hence wood is favoured for applications requiring a flat sheet in bending, as it has a low density due to the large voids contained in its structure. This merit index can now be represented on a materials-selection map, allowing easy comparison of the different materials available for this application. Most materials-selection maps, such as the one shown, are plotted on logarithmic scales. This allows a given value of the index to be indicated by a straight line: from the equation for deflection, \({{{E^{1/3}}} \over \rho } = k\), where k is a constant. Hence \(\log (E) = 3\log (\rho ) + {k^{'}}\), where \({k^{'}}\) is a different constant, equal to \(3\log (k)\). On the selection map shown below, moving a line of the correct gradient as far as possible to the top and the left (and hence maximising E and minimising \(\rho\)) will give materials with the best value of the merit index. Try this by moving the line for the appropriate merit index on the following materials-selection maps for engineering alloys and biomaterials.
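By way of illustration (this sketch is not part of the original TLP), the script below ranks three materials by the panel-bending index E^(1/3)/ρ. The property values are rough, representative figures chosen for illustration, not values read from the maps.

```python
# Illustrative ranking by the panel-bending merit index E^(1/3)/rho.
# Property values are rough, representative figures.
materials = {
    # name: (E in GPa, density in kg m^-3)
    "steel":           (210, 7850),
    "aluminium alloy": (70, 2700),
    "wood (// grain)": (10, 500),
}

def merit_index(E_GPa, rho):
    """E^(1/3)/rho in Pa^(1/3) per (kg m^-3)."""
    return (E_GPa * 1e9) ** (1 / 3) / rho

for name, (E, rho) in sorted(materials.items(),
                             key=lambda kv: merit_index(*kv[1]),
                             reverse=True):
    print(f"{name:16s} E^(1/3)/rho = {merit_index(E, rho):.2f}")
```

With these rough figures, wood comes out well ahead of both alloys on this index, consistent with its use in the Mosquito aeroplane discussed below.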
Other important merit indices used in relation to this chart are: * \({E \over \rho }\), relating to the minimum extension achievable on loading a strut or tie in tension with the cross-section as a free parameter, or the minimum deflection achievable on bending a beam with the width as a free parameter, * \({{{E^{1/2}}} \over \rho }\), relating to the minimum deflection achievable on bending a beam with the cross-sectional area (of fixed shape) as a free parameter. It can be seen from the materials-selection chart that aluminium alloys behave well in bending, and hence are used in aeroplanes. Wood is also an impressive material for this application, and it was used in the past in the Mosquito aeroplane of World War II: ![mosquito aeroplane](figures/mosquito_sml.jpg) Mosquito aeroplane Wood is able to resist bending with a low mass, as it has evolved so that trees will not bend far in strong winds or under their own weight. To maintain an array of leaves catching light for photosynthesis, tree trunks and branches have developed the materials properties of low density and relatively high Young's modulus. Strength - Density selection map = Strength σf is the maximum stress which a material can support before failure, where failure is usually taken to be the end of the purely elastic regime. For ductile materials σf is the stress for the onset of plastic (irreversible) flow. Brittle materials break before they yield, so for them σf is the stress for the onset of brittle fracture. Merit indices used in conjunction with these maps are: \({{\sigma_f} \over \rho }\), used to find the material giving the strongest strut or tie in tension for a given mass, with the cross-sectional area as the free parameter; and \({{\sigma_f^{2/3}} \over \rho }\), used to find the material giving the strongest beam (i.e. that supporting the largest bending moment before the onset of plastic yielding or other failure on the surface of the beam) of a given mass, on bending a beam with a specified cross-sectional shape but free cross-sectional area. ![](../images/divider400.jpg) The merit index \({{\sigma_f^{1/2}} \over \rho }\), for maximising the strength of a beam under a bending load for a given mass, with a specified beam width and unspecified height, will now be derived. Consider a beam of length L, width w and height h, subject to an end load F. The second moment of area is: $$I = {{w{h^3}} \over {12}}$$ and the mass of the beam is: $$m = whL\rho$$ which can be rearranged in terms of the free parameter h as: $$h = {m \over {wL\rho }}$$ For beam bending, \(M = \kappa EI\), and at the onset of failure the stress at the beam surface (a distance h/2 from the neutral axis) reaches σf, so \(\kappa = {{2{\sigma_f}} \over {Eh}}\). Therefore: $$M = {{2{\sigma_f}I} \over h}$$ Substituting in the value for I gives: $$M = {{{\sigma_f}w{h^2}} \over 6}$$ To find the desired merit index, the free parameter h can then be eliminated from the equation, giving: $$M = {{{m^2}} \over {6w{L^2}}}\left( {{{\sigma_f}} \over {{\rho ^2}}} \right)$$ Hence maximising \({{\sigma_f} \over {{\rho ^2}}}\), or equivalently \({{\sigma_f^{1/2}} \over \rho }\), by moving the merit index line towards the top and the left of the above materials-selection map gives the material that is strongest in bending a beam of a given mass (given the conditions of fixed beam width and variable height). Young's Modulus - Strength selection map =
The Young's modulus – strength materials-selection map is used in conjunction with merit indices relating to elastic deformation and elastic energy storage. For simple tensile loading, to assess differences in maximum recoverable elastic deformation, the merit index \({{\sigma_f} \over E}\) is used. To compare maximum elastic strain energy per unit volume, the merit index \({{\sigma_f^2} \over E}\) is used, and for the maximum elastic strain energy per unit mass, the merit index \({{\sigma_f^2} \over {E\rho }}\) is used. ![](../images/divider400.jpg) To derive the merit index \({{\sigma_f} \over E}\), the cross-sectional shape of the tie in tension need not be considered, and engineering parameters are not involved in the derivation, making this a simple merit index to derive, as follows. From the definition of Young's modulus, while the material is behaving elastically: $$\sigma = E\varepsilon$$ Therefore the maximum elastic strain (i.e. the deformation at the yield point) depends simply on the maximum elastic stress (i.e. the stress before failure), which is the strength *σf*: \(\varepsilon = {{\sigma_f} \over E}\) So to find the material with the greatest maximum recoverable deformation, the merit index \({{\sigma_f} \over E}\) is maximised using the materials-selection chart shown. This involves moving the merit index line towards the bottom and the right of the map. From doing this it can be seen that *cartilage*, *viscid silk*, *resilin* and *skin* have good values of this merit index. Cartilage is found on the ends of bones in joints, and so it is important that it is flexible while not being permanently deformed or breaking when the joints bend. The same argument applies to skin: it would be rather gruesome if our skin split open every time we bent our elbow! The merit index \({{\sigma_f} \over E}\) is also used to find the best material for elastic hinges. This explains resilin's high value of the merit index, as it is found in insect wing hinges. The insect only has to pull the wing back and it will then be pulled forward by the elasticity of the hinge (with resilin this is particularly efficient, as it has a very low absorption of energy on elastic bending to and fro – i.e. resilin has a high *coefficient of restitution*, or *resilience*). This is discussed further in a separate teaching and learning package. ![](figures/spider_web_label_sml.jpg) Spider's web Viscid silk has a high value of this merit index, being able to stretch up to three times its own length. Viscid silk makes up the capture threads in a spider's web: it must entangle flies, so a high value of the maximum recoverable deformation is an advantage. Viscid silk also needs to absorb the kinetic energy of the fly, corresponding to a high value of \({{\sigma_f^2} \over E}\). This is discussed further in a separate teaching and learning package, where it is noted that viscid silk has (in contrast to resilin) a low coefficient of restitution – this reduces the elastic energy returned to the fly. Toughness - Young's Modulus selection map = A material's resistance to cracking can be characterised in various ways, each relevant in particular circumstances: * Toughness (fracture energy), GC, to maximise the impact resistance of the material used.
* Fracture toughness, Kc [= (EGC)1/2], to maximise the resistance to cracking of a material under load. * \({\left( {{{G_C} \over E}} \right)^{1/2}}\), to maximise the elastic stretching of a material before the onset of brittle fracture (cracking). These merit indices are easily used with the Toughness – Young's Modulus materials-selection map. ![](../images/divider400.jpg) The following method is used to derive the merit index \({\left( {{{G_C} \over E}} \right)^{1/2}}\). Consider stretching a material: the strain (or amount of stretching) of the material is given by the equation: $$\varepsilon = {\sigma \over E}$$ The amount of stretching at the fracture point is: $${\varepsilon_{\max }} = {{\sigma_f} \over E}$$ The stress to give fracture is given by the formula: $${\sigma_f} = {\left( {{{{G_C}E} \over {\pi c}}} \right)^{1/2}}$$ where 2c is the crack length. Thus: \({\varepsilon_{\max }} = {\left( {{{{G_C}E} \over {\pi c}}} \right)^{1/2}}\left( {{1 \over E}} \right)\) or \({\varepsilon_{\max }} = {\left( {{1 \over {\pi c}}} \right)^{1/2}}{\left( {{{G_C} \over E}} \right)^{1/2}}\) Therefore, in order to maximise the stretching of the material without failure, \({\left( {{{G_C} \over E}} \right)^{1/2}}\) must be maximised by moving the line corresponding to the merit index towards the top and the left of the materials-selection map above. ![](../images/divider400.jpg) Try this for yourself and you will see that skin is the best biomaterial. Skin clearly needs this property, as it is continually being stretched in normal life. Different materials show good values for other merit indices: for instance, antler has a high value of toughness, GC. Antler is a composite material made of the ceramic hydroxyapatite and the polymer collagen, similarly to compact bone, giving it a high toughness. This enables stags to fight with their antlers, generating large impacts, without the antlers cracking. Stags fight in this way, called rutting, as a mating ritual to prove to females that they are the strongest stag and hence will produce the healthiest offspring. Antlers are shed and re-grown each year, and are sometimes found in furniture and artwork. ![stags rutting](figures/stags_label_sml.jpg) Stags rutting Wood tested parallel to its grain (i.e. for cracks in the plane perpendicular to the grain) has the highest value of fracture toughness, Kc, of any biomaterial. It is a fibre composite made up of cellulose fibres in a lignin matrix (this is discussed in a separate teaching and learning package). Trees could easily get small cracks in them, and it is important that the trunk and branches do not fail in wind or under their own weight. The composite structure makes wood very *anisotropic*: its Kc in splitting mode (cracks parallel to the grain) is very low. Comparison of engineering materials and biomaterials = Sometimes it is important to look at more than one materials-selection map and merit index. For instance, on comparing the use of steel and aluminium as materials for aircraft, the merit index \({{\sigma_f} \over \rho }\) for loading a tie in tension shows that steel outperforms aluminium in this context. (Of course, as shown on the maps, both steels and aluminium alloys are families of materials with broad ranges of properties.) In fact, steel wires were used by the Wright brothers and in World War I biplanes: ![Wright brothers' aeroplane](figures/wright_sml.jpg) Wright brothers' aeroplane
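The steel-versus-aluminium comparison can be made concrete with rough numbers (this sketch is not part of the original TLP; the property values are illustrative, with the steel strength taken as that of a high-tensile wire).

```python
# Rough comparison of steel and an aluminium alloy for three merit
# indices. Property values are illustrative only.
E_steel, rho_steel, sf_steel = 210e9, 7850, 2000e6   # Pa, kg m^-3, Pa
E_al,    rho_al,    sf_al    = 70e9,  2700, 500e6

indices = [
    ("sigma_f/rho (tie in tension)", lambda E, rho, sf: sf / rho),
    ("E^(1/2)/rho (beam bending)",   lambda E, rho, sf: E ** 0.5 / rho),
    ("E^(1/3)/rho (panel bending)",  lambda E, rho, sf: E ** (1 / 3) / rho),
]

for label, f in indices:
    ratio = f(E_steel, rho_steel, sf_steel) / f(E_al, rho_al, sf_al)
    print(f"{label:30s} steel/aluminium = {ratio:.2f}")
```

With these figures steel wins on σf/ρ (ratio above 1), while aluminium wins on E^(1/2)/ρ and E^(1/3)/ρ, as the next paragraph describes.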
However, aluminium performs well under the elastic bending of beams and flat panels, having a greater value than steel for the merit indices \({{{E^{1/2}}} \over \rho }\) and \({{{E^{1/3}}} \over \rho }\), and hence aluminium alloys are now used to make major parts of aircraft whereas steel is not. It may also be important for a material to have high values of more than one merit index. For example, a material may be wanted that will be tough and also have a good value of Young's modulus for a low relative cost per unit volume: ![](maps/engr-cost-modulus/cost-modulus-sml.jpg) Looking at the materials-selection maps, good materials that fit these requirements are lead alloys, zinc (Zn) alloys and steels. Consider a comparison of biomaterials to man-made materials, specifically commonly used engineering materials such as steel, aluminium alloys, titanium alloys, alumina and polyethylene. It can be seen that alumina, aluminium alloys, steel and titanium alloys perform quite well in terms of Young's modulus – density related merit indices. This is due to their high values of Young's modulus, despite their high densities. Polyethylene compares poorly with biomaterials in this respect, however, due to its low value of Young's modulus. Alumina, aluminium alloys, steel and titanium alloys are more commonly used in engineering applications than biomaterials. This is despite biomaterials, particularly types of wood, outperforming these materials for some of the merit indices on this materials-selection map. It is important to note that the distinction between the materials of living systems and conventional engineering materials is not absolute. Wood is more widely used than these engineering materials for low-tech applications, and is in fact the world's principal material for building. This makes wood among the most common and important structural engineering materials. Although wood has only a quarter of the strength of steel, it has four times the specific strength (\({{{\sigma _f}} \over \rho }\)) and is renewable, recyclable and requires only a low energy input to make. Wood costs roughly 1/60th the price of steel per tonne, and around 10⁹ tonnes are used annually worldwide, which is comparable to the amount of iron and steel used globally. Wood has many uses such as the hubs and rims of traditional wheels (for which elm and oak, respectively, are used), longbows (made from yew) and cricket bats (made of willow). However, wood is strongly anisotropic, highly susceptible to water damage and unable to survive and perform at high temperatures. There is also a large amount of variability in wood, as different growing conditions for trees of the same species lead to differing mechanical properties. It is possible to limit the anisotropy of wood by using plywood, which involves creating layers of wood with orthogonal “grains” (orientation of the tracheid/fibre cell structure). The other limitations cannot so easily be overcome, limiting wood's use as an engineering material where demanding specifications must be met. (This is discussed in the teaching and learning package.) ![](figures/palm_tree_label_sml.jpg) Palm trees can survive very high winds, behaving well under bending moments. Looking at the strength – density materials-selection chart: alumina clearly outperforms biomaterials when loading a tie under tension; it has a high value of the merit index \({{{\sigma _f}} \over \rho }\) and performs well for all merit indices shown on the above map.
But the applicability of alumina is likely to be limited by its brittleness (low *Kc*) and the difficulty of forming it into components. Aluminium alloys, titanium alloys, and particularly steel and polyethylene, have merit indices inferior to those of many biomaterials. This is due to their greater density and, in polyethylene's case, relatively low strength. This doesn't necessarily mean that biomaterials will be chosen over common engineering materials, however. Other factors to be considered in choosing a material for an engineering application, once merit indices have been compared, include: * cost * resistance to creep * ease of manufacture * availability * resistance to corrosion * ability to function at a required temperature * energy efficiency Only when all these factors, along with the merit indices and materials-selection maps, have been considered can the best material for a particular application be found. Materials-selection maps and merit indices can be used to compare broad ranges of different materials to discover roughly which few materials are most suitable. Each material would then need to be assessed in more detail to discover which material is truly best for a specific use. Summary = * In this TLP you have learnt how and why to use materials-selection maps, and about the common merit indices used in conjunction with these maps. Simple merit indices have been derived and used to analyse the difference in applicability of various materials. * Many biomaterials show interesting properties and can outperform common engineering materials. You will have discovered why certain biomaterials have evolved to show these properties and you will be able to think about why other biomaterials may have certain properties. * You should now be aware of how to choose a shortlist of materials suitable for a specific application, and be able to decide which material out of a selection is most suited to a particular purpose. However, you should also be aware of the limitations of merit indices and materials-selection maps for choosing a material and be able to think of other practical considerations involved. Questions = ### Quick questions *You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!* 1. Looking at the specific modulus - specific strength map for biomaterials, which material gives the greatest maximum elastic strain energy per unit mass? | | | | | - | - | - | | | a | cellulose | | | b | viscid silk | | | c | muscle | | | d | elastin | 2. For which engineering application does the best steel perform better than the best aluminium alloy? | | | | | - | - | - | | | a | Minimum elastic extension on loading a tie of given mass and length in tension. | | | b | Minimum deflection on bending a beam of given mass, with the cross-sectional area of the beam as a free parameter. | | | c | Greatest maximum elastic strain energy per unit volume. | | | d | Strongest on loading a beam of a given mass in tension, with the cross-section of the beam as a free parameter. | 3. Why is it favourable for viscid silk to have a high value of the merit index ![](eqn/eqn_questions/Eqn.001.gif)? | | | | | - | - | - | | | a | so that the silk worm's cocoon will not be easily damaged during the silk worm's transition to a moth. | | | b | to absorb elastically the kinetic energy of a fly hitting the web. | | | c | to enable silk worms to easily break out of their cocoon.
| | | d | so that the fly gets entangled in the capture threads of the spider's web. | 4. Looking at the materials-selection chart toughness - Young's modulus for biomaterials, which of the listed biomaterials has the greatest fracture toughness, *Kc*? | | | | | - | - | - | | | a | Antler | | | b | Skin | | | c | Calcite | | | d | Cork | 5. Why would diamond not be used as a material that gives the minimum extension for loading a beam of given mass in tension? | | | | | - | - | - | | | a | It has low strength and so would break too easily. | | | b | It is too expensive. | | | c | It has a low value of the merit index. | | | d | It has too great a mass. | 6. Which of the listed commonly used engineering materials will show the minimum deflection on bending a beam of given mass with the cross-sectional area of the beam as the free parameter (i.e. which material has the maximum ![](eqn/eqn_questions/Eqn.003.gif) value)? | | | | | - | - | - | | | a | Aluminium alloys. | | | b | Steels. | | | c | Alumina. | | | d | Polyethylene (HDPE and LDPE). | ### Deeper questions *The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.* 7. Derive the merit index for the maximum elastic strain energy per unit volume ![](eqn/eqn_questions/Eqn.004.gif). 8. (...continued from previous question) Now derive the merit index for the maximum elastic strain energy per unit mass ![](eqn/eqn_questions/Eqn.010.gif). Going further = ### References * Michael F. Ashby, *Materials Selection in Mechanical Design*, Pergamon Press, second edition, 1999. * U. G. K. Wegst and M. F. Ashby, *The Mechanical Efficiency of Natural Materials*, Philosophical Magazine, 21 July 2004, vol. 84, no. 21, 2167-2181. ### Websites * Why wood is the world's most environmentally friendly building material. * An article on how spiders make their silk, including pictures. * Article and video, Spider Expert Cheryl Y. Hayashi, American Museum of Natural History
Aims On completion of this TLP you should: * understand the thermodynamic principles behind free-energy curves * understand how free-energy curves relate to equilibrium phase diagrams * be able to construct a binary phase diagram from cooling curves * be able to use phase diagrams to predict the composition and volume fraction of phases Introduction The phase diagram is a crucial part of metallurgy - it shows the equilibrium states of a mixture, so that given a temperature and composition, it is possible to calculate which phases will be formed, and in what quantities. As such it is very valuable to be able to construct a phase diagram and know how to use it to predict the behaviour of materials. The main theory behind phase diagrams is based around the latent heat that is evolved when a mixture is cooled and changes phase. This means that by plotting graphs of temperature against time for a variety of different compositions, it should be possible to see at what temperatures the different phases form. It is relatively easy to produce a rough binary phase diagram, as will be shown later in the package; although it is quick to take readings for the top part of a phase diagram, it takes longer, and hence requires more sensitive equipment, to monitor the changes that take place when a solid changes phase. A typical simple binary phase diagram is as follows: ![Schematic phase diagram for a binary system](images/diagram1.gif) Here L stands for liquid, A and B are the two components, and α and β are two solid phases rich in A and B respectively. The blue lines represent the liquidus and solidus lines, which are relatively simple to measure. The red lines involve a solid-to-solid transition, and so require much more sensitive equipment. However, there is also a lot of thermodynamic theory behind phase diagrams, which allows more problematic or more complex systems to be predicted, and this can lead to faster creation of phase diagrams, as it can take a long time to pick up all the stable phases in experiments, and there is not always the time available for such practical work. A crucial point to remember is that a phase diagram should always display the equilibrium phases. At lower temperatures these are hard to attain due to kinetic limitations; even at higher temperatures, there may not be enough time for the solid to fully equilibrate as the system is cooling. Thermodynamics: Basic terms = ### Internal Energy, U The internal energy of a system is the sum of the potential energy and the kinetic energy. For many applications it is necessary to consider a small change in the internal energy, dU, of a system: dU = dq + dw = CdT - PdV where dq = the heat supplied to a system, dw = the work performed on the system, C = heat capacity, dT = change in temperature, P = pressure and dV = change in volume. At constant volume, dU = CVdT ### Enthalpy, H Enthalpy is the constant-pressure version of the internal energy. Enthalpy, H = U + PV. Therefore, for small changes in enthalpy, dH = dU + PdV + VdP. At constant pressure, dH = CPdT. ### Entropy, S Entropy is a measure of the disorder of a system. In terms of molecular disorder, the entropy consists of the configurational disorder (the arrangement of different atoms over identical sites) and the thermal vibrations of the atoms about their mean positions. A change in entropy is defined as, \[dS \ge \frac{{{\rm{d}}q}}{T}\] For reversible changes, i.e. changes under equilibrium conditions, dq = TdS. For natural changes, i.e.
under non-equilibrium conditions, dq < TdS. ### Gibbs free energy, G The Gibbs free energy can be used to define the equilibrium state of a system. It considers only the properties of the system and not the properties of its surroundings. It can be thought of as the energy which is available in the system to do useful work. Free energy, G, is defined as G = H - TS = U + PV - TS For small changes, dG = dH - TdS - SdT = VdP - SdT + (dq - TdS) For changes occurring at constant pressure and temperature, dG = dq - TdS Therefore, dG = 0 for reversible (equilibrium) changes, and dG < 0 for non-reversible changes. From this it is clear that G tends to a minimum at equilibrium. The Helmholtz free energy, F, is sometimes used instead of G, and is the equivalent of G for changes at constant volume. It is defined as F = U - TS ### Thermodynamics of Solutions Consider a mechanical mixture of two phases, A and B. If this is then transformed into a single solution phase with A and B atoms distributed randomly over the atomic sites, then there will be: * an enthalpy change associated with interactions between the A and B atoms, ΔHmix * an entropy change, ΔSmix, associated with the random mixing of the atoms * a free energy of mixing, ΔGmix = ΔHmix - TΔSmix Assume that the system consists of N atoms: xAN of A and xBN of B, where xA = fraction of A atoms and xB = (1 - xA) = fraction of B atoms. ### Enthalpy of mixing In calculating ΔHmix it is assumed that only the potential energy term undergoes any significant change during mixing. This change arises from the interactions between nearest-neighbour atoms. Consider an alloy consisting of atoms A and B. If the atoms prefer like neighbours, A atoms will tend to cluster and likewise B atoms, so a greater number of A-A and B-B bonds will form. If the atoms prefer unlike neighbours, a greater number of A-B bonds will form. If there is no preference, A and B atoms will be randomly distributed. Let wAA be the interaction energy between A-A nearest neighbours, wBB that for B-B nearest neighbours and wAB that for A-B nearest neighbours. All of these energies are negative, as the zero in potential energy is for infinite separation between atoms. Let each atom of A and B have co-ordination number z. Therefore, the total number of nearest-neighbour pairs is Nz/2. Probability of A-A neighbours = xA², probability of B-B neighbours = xB², probability of A-B neighbours = 2xAxB. For a solid solution the total interaction energy is Hs ≈ Us = (Nz/2)(xA²wAA + xB²wBB + 2xAxBwAB) For pure A, HA = (Nz/2)wAA; for pure B, HB = (Nz/2)wBB. Hence the enthalpy of mixing is given by ΔHmix = Hs - (xAHA + xBHB) = (Nz/2)xAxB(2wAB - wAA - wBB) We can define an interaction parameter W = (Nz/2)(2wAB - wAA - wBB) Therefore, ΔHmix = WxAxB If A-A and B-B interactions are energetically more favourable than A-B interactions then W > 0, so ΔHmix > 0 and there is a tendency for the solution to form A-rich and B-rich regions. If A-B interactions are energetically more favourable than A-A and B-B interactions, W < 0, ΔHmix < 0, and there is a tendency to form ordered structures or intermediate compounds. Finally, if the solution is ideal and all interactions are energetically equivalent, then W = 0 and ΔHmix = 0. ### Entropy of mixing Per mole of sites, the entropy of mixing is ΔSmix = kN(-xAlnxA - xBlnxB) (the derivation of this result makes use of Stirling's approximation), where N = Avogadro's number and kN = R, the gas constant. Hence, ΔSmix = R(-xAlnxA - xBlnxB) A graph of ΔSmix versus xA has a different form from ΔHmix.
The curve has an infinite gradient at xA = 0 and xA = 1. The free energy of mixing is now given by ΔGmix = ΔHmix - TΔSmix = xAxBW + RT(xAlnxA + xBlnxB) For W < 0, ΔGmix is negative at all temperatures, and mixing is exothermic. For W > 0, ΔHmix is positive and mixing is endothermic. Free energy curves = For any phase the free energy, *G*, is dependent on the temperature, pressure and composition. ### Pure Substances For pure substances the composition does not vary and there is little dependence on pressure, so the free energy varies most strongly with temperature. The phase with the lowest free energy at a given temperature will be the most stable. The curves for the free energies of the liquid and solid phases of a substance have been plotted below. They show that below the melting temperature the solid phase is most stable, and above this temperature the liquid phase is stable. At the melting temperature, where the two curves cross, the solid and liquid phases are in equilibrium. ![Diagram of free energy curves](images/diagram2.gif) ### Solutions Solutions contain more than one component, and in these situations the free energy of the solution will become dependent on its composition as well as the temperature. It was shown above that the free energy of mixing is: ΔGmix = ΔHmix - TΔSmix = xAxBW + RT(xAlnxA + xBlnxB) The shape of the ΔGmix curve is dependent on temperature. For the curve shown below the value of ΔHmix is positive, leading to a maximum on the curve at low temperatures. ΔGmix is always negative for low solute concentrations as the gradient of ΔSmix is infinite at xA = 0 and xA = 1. ![Graphs of free energy of mixing at varying temperatures](images/diagram3.gif) At high temperatures there is a complete solution and the curve has a single minimum. At low temperatures the curve has a maximum and two minima. In the composition range between the two minima (denoted by the dashed lines) a mixture of two phases is more stable than a single-phase solution. The free energy of a regular solid solution, ΔGsol, is the sum of the free energy of mixing ΔGmix and the free energy of fusion ΔGfus. ### Free energy of fusion When a liquid solidifies there is a change in free energy on freezing, as the atoms move closer together and form a crystalline solid. For a pure component, this can be estimated empirically using Richard's Rule: ΔGfusion = -9.5(Tm - T) J mol⁻¹ where Tm = melting temperature and T = current temperature. ΔGfusion = 0 at the melting temperature of the component, ΔGfusion < 0 below the melting temperature of the component, and ΔGfusion > 0 above the melting temperature of the component. In an alloy, if both the liquid and solid solutions are ideal, then ΔGfusion for the alloy can be interpolated between the values for the two components. Now we can plot the free energy of a regular solid solution from the equation ΔGsol = ΔGmix + ΔGfusion ![Graphs of free energy of fusion](images/diagram4.gif) ![Graphs of the free energy of fusion](images/diagram5.gif) Phase diagrams 1 = Free energy curves can be used to determine the most stable state for a system, i.e. the phase or phase mixture with the lowest free energy for a given temperature and composition. Below is a schematic free-energy curve for the solid phase of an alloy. ![Schematic free-energy curve for the solid phase of an alloy](images/diagram6.gif) The solid shown could either exist as a mixture or as a homogeneous solution of A and B.
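Whether the mixture or the single-phase solution wins can be checked numerically from the regular-solution model above. The sketch below evaluates ΔGmix at a low and a high temperature and locates the interior minima of the curve; the interaction parameter W and the two temperatures are assumed, illustrative values (for this W the two minima merge into one above roughly W/2R ≈ 900 K):

```python
# Minimal sketch: regular-solution free energy of mixing,
#   dG_mix(x) = W*x*(1 - x) + R*T*(x*ln(x) + (1 - x)*ln(1 - x)),
# evaluated at two temperatures. W and the temperatures are assumed values,
# chosen so that the low-temperature curve shows two minima and a maximum.
import numpy as np

R = 8.314   # gas constant, J mol^-1 K^-1
W = 15e3    # interaction parameter, J mol^-1 (assumed; W > 0, endothermic)

def dG_mix(x, T):
    return W * x * (1 - x) + R * T * (x * np.log(x) + (1 - x) * np.log(1 - x))

x = np.linspace(1e-6, 1 - 1e-6, 2001)
for T in (600.0, 1200.0):
    grad = np.gradient(dG_mix(x, T), x)
    # interior minima: the gradient changes sign from negative to positive
    minima = x[1:-1][(grad[:-2] < 0) & (grad[2:] > 0)]
    print(f"T = {T:4.0f} K: interior minima near x_A = {np.round(minima, 3)}")
```

At the lower temperature the two minima mark the composition range over which a two-phase mixture is more stable than a single solution, as described above.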
The figures below show that an alloy of composition C can exist in different configurations with differing free energies. In the first figure (below) the free energy of unmixed A and B is shown as the diagonal black line. The free energy of this mixture at composition C is shown as a red point. ![Schematic free-energy curve for the solid phase of an alloy](images/diagram7.gif) The system can reduce its free energy by existing as a mixture of two phases. ![Schematic free-energy curve for the solid phase of an alloy](images/diagram8.gif) Though the system has reduced its free energy from that of the mixture, the most stable configuration for the system is a solid solution. This allows the free energy of the system to sit on the free-energy curve. ![Schematic free-energy curve for the solid phase of an alloy](images/diagram9.gif) For most systems there will be more than one phase, and an associated free-energy curve, to consider. At a given temperature the most stable phase for a system can vary with composition. While the system could consist entirely of the phase which is most stable at a given composition and temperature, if the free-energy curves for the two phases cross, the most stable configuration may be a mixture of two phases with compositions differing from that of the overall system. The total free energy of the system in any given two-phase configuration can be found by linking the two phases in question with a straight line on a free-energy plot. ![Schematic free-energy plots for liquid and solid phases](images/diagram10.gif) ![Schematic free-energy plots for liquid and solid phases](images/diagram11.gif) Taking a line that is a common tangent to the two free-energy curves produces the lowest possible free energy for the system as a whole. Where the line meets the free-energy curves defines the composition of each phase. ![Schematic free-energy plots for liquid and solid phases](images/diagram12.gif) For compositions where it is not possible to draw a common tangent between the two free-energy curves, the system will sit entirely in the phase with the lowest free energy. The borders between the single- and two-phase regions mark the positions of the solidus and liquidus on the phase diagram. ![Schematic free-energy plots for liquid and solid phases](images/diagram13.gif) When the temperature is altered, the compositions of the solid and liquid in equilibrium change and build up the shape of the solidus and liquidus curves on a phase diagram. Below, a binary system can be seen along with the free-energy curves for the liquid and solid phases at a range of temperatures shown on the phase diagram. (Schematic phase diagram shown alongside schematic free-energy curves for solid and liquid at each of the marked temperatures.) Phase diagrams 2 = The free-energy curves and phase diagrams discussed in Phase Diagrams 1 were all for systems where the solid exists as a solution at all compositions and temperatures. In most real systems this is not the case. This is due to a positive ΔHmix caused by unfavourable interactions between unlike neighbour atoms. As the temperature is reduced, the ΔHmix term becomes more significant and the curve turns upward at intermediate compositions, resulting in a curve with two minima and one maximum, as described earlier.
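Before returning to that two-minima solid curve, note that the common-tangent construction described above can be carried out numerically: the tangent compositions satisfy two conditions, equal slopes on the two curves, and a chord between the touching points whose slope equals that common slope. The sketch below applies this to a pair of assumed, illustrative curves (both phases treated as ideal solutions, with the solid offset by each component's free energy of freezing from Richard's Rule introduced earlier); the temperature, melting points and initial guesses are likewise assumptions:

```python
# Minimal sketch of a numerical common-tangent construction between assumed
# solid and liquid free-energy curves at one temperature. The liquid is the
# reference state; the solid is offset by Richard's Rule, dG = -9.5*(Tm - T).
import numpy as np
from scipy.optimize import fsolve

R, T = 8.314, 1150.0            # temperature of interest, K (assumed)
Tm_A, Tm_B = 1300.0, 1000.0     # melting points of A and B, K (assumed)

def G_ideal(x):                 # ideal free energy of mixing; x is the B fraction
    return R * T * (x * np.log(x) + (1 - x) * np.log(1 - x))

def G_liq(x):
    return G_ideal(x)

def G_sol(x):                   # add each component's free energy of freezing
    dG_A = -9.5 * (Tm_A - T)    # negative: A is below its melting point
    dG_B = -9.5 * (Tm_B - T)    # positive: B is above its melting point
    return G_ideal(x) + (1 - x) * dG_A + x * dG_B

def slope(G, x, h=1e-7):        # numerical derivative
    return (G(x + h) - G(x - h)) / (2 * h)

def tangency(v):                # equal slopes; chord slope equals tangent slope
    xs, xl = v
    s = slope(G_sol, xs)
    chord = (G_liq(xl) - G_sol(xs)) / (xl - xs)
    return [s - slope(G_liq, xl), s - chord]

xs, xl = fsolve(tangency, [0.3, 0.7])   # initial guesses matter here
print(f"solidus at x_B ~ {xs:.3f}, liquidus at x_B ~ {xl:.3f}")
```

Repeating this at a series of temperatures between the two melting points traces out the solidus and liquidus, which is how the lens-shaped diagram described above is built up.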
A common tangent can then be drawn between the two minima, showing that the system can reduce its free energy by existing as a mixture of two distinct phases. The free energy of a system of composition C0 can be minimised by existing as a mixture of two solid phases of composition C1 and C2: ![Schematic free-energy curve for a mixture of two solid phases](images/diagram20.gif) This effect can result in a system which, though single-phase upon solidification, will separate into two solid phases on cooling (e.g. Cr-W). Another possible result is that the free-energy curve for the liquid will intersect the upturned section of the free-energy curve for the solid before the temperature is high enough to induce the formation of a solid solution. As the temperature is increased, the free-energy curve for the liquid moves downward relative to the solid curve and reaches a position where it is possible to link two parts of the solid free-energy curve and one part of the liquid free-energy curve with a common tangent. At this temperature, known as the eutectic temperature, three phases are in equilibrium and can be joined by a common tangent: ![Schematic free-energy curves](images/diagram21.gif) At this temperature there will be a composition which solidifies at a single temperature through the co-operative growth of the two solid phases. This is the eutectic composition. It is this composition which will exhibit the lowest melting point for the system. At temperatures above that of the eutectic there will be two common tangents, producing two two-phase regions at the same temperature. The two different solid phases are commonly labelled α and β. ![Schematic free-energy curves](images/diagram22.gif) Eutectic systems therefore have a liquidus which descends in a V to the eutectic point, where it meets the eutectic invariant-reaction line. Here is an example of a eutectic phase diagram. α and β are both solid phases. ![Eutectic phase diagram](images/diagram23.gif) The two-phase solid region on the phase diagram will consist of a mixture of eutectic and either α or β phase, depending on whether the alloy composition is hypoeutectic or hypereutectic. The constitution of an alloy under equilibrium conditions can be found from its phase diagram. This will be discussed in a later section. Interpretation of cooling curves = The melting temperature of any pure material (a one-component system) at constant pressure is a single unique temperature. The liquid and solid phases exist together in equilibrium only at this temperature. When cooled, the temperature of the molten material will steadily decrease until the melting point is reached. At this point the material will start to crystallise, leading to the evolution of latent heat at the solid-liquid interface, which maintains a constant temperature across the material. Once solidification is complete, steady cooling resumes. The arrest in cooling during solidification allows the melting point of the material to be identified on a time-temperature curve. ![Cooling curve for a pure material](images/diagram24.gif) Most systems consisting of two or more components exhibit a temperature range over which the solid and liquid phases are in equilibrium. Instead of a single melting temperature, the system now has two different temperatures, the liquidus temperature and the solidus temperature, which are needed to describe the change from liquid to solid.
The liquidus temperature is the temperature above which the system is entirely liquid, and the solidus is the temperature below which the system is completely solid. Between these two points the liquid and solid phases are in equilibrium. When the liquidus temperature is reached, solidification begins and there is a reduction in cooling rate caused by latent-heat evolution, and a consequent reduction in the gradient of the cooling curve. Upon the completion of solidification the cooling rate alters again, allowing the temperature of the solidus to be determined. As can be seen on the diagram below, these changes in gradient allow the liquidus temperature *T*L and the solidus temperature *T*S to be identified. ![Cooling curve for system of two components](images/diagram25.gif) When cooling a material of eutectic composition, solidification of the whole sample takes place at a single temperature. This results in a cooling curve similar in shape to that of a single-component system, with the system solidifying at its eutectic temperature. ![Cooling curve for material of eutectic composition](images/diagram26.gif) When solidifying hypoeutectic or hypereutectic alloys, the first solid to form is a single phase which has a composition different to that of the liquid. This causes the liquid composition to approach that of the eutectic as cooling occurs. Once the liquid reaches the eutectic temperature it will have the eutectic composition, and will freeze at that temperature to form a solid eutectic mixture of two phases. Formation of the eutectic causes the system to cease cooling until solidification is complete. The resulting cooling curve shows the two stages of solidification, with a section of reduced gradient where a single phase is solidifying and a plateau where the eutectic is solidifying. ![Cooling curve for hypoeutectic or hypereutectic alloys](images/diagram27.gif) By taking a series of cooling curves for the same system over a range of compositions, the liquidus and solidus temperatures for each composition can be determined, allowing the solidus and liquidus to be mapped to determine the phase diagram. Below are cooling curves for the same system recorded for different compositions and then displaced along the time axis. The red regions indicate where the material is liquid, the blue regions indicate where the material is solid and the green regions indicate where the solid and liquid phases are in equilibrium. ![Series of cooling curves for the same system over a range of compositions](images/diagram28.gif) By removing the time axis from the curves and replacing it with composition, the cooling curves indicate the temperatures of the solidus and liquidus for a given composition. ![Series of cooling curves for the same system over a range of compositions, with time axis replaced by composition axis](images/diagram29.gif) This allows the solidus and liquidus to be plotted to produce the phase diagram: ![Series of cooling curves for the same system over a range of compositions, with time axis replaced by composition axis, resulting in phase diagram for system](images/diagram30.gif) Experiment and results = The simplest way to construct a phase diagram is by plotting the temperature of a liquid against time as it cools and turns into a solid. As discussed in the previous section, the solidus and liquidus can be seen on the graphs as the points where the cooling is retarded by the emission of latent heat.
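This gradient-change argument can be automated before turning to the real experiment below. The following sketch builds a synthetic cooling curve (Newtonian cooling, with the rate artificially retarded between two assumed arrest temperatures to mimic latent-heat evolution) and then locates the arrests as spikes in the second difference of the temperature record. All numerical values are assumed for illustration; they are not measured data:

```python
# Minimal sketch: detecting cooling-curve arrests from changes in gradient.
# The cooling model and both arrest temperatures are assumed values.
import numpy as np

t = np.arange(0.0, 600.0, 1.0)        # time, s (1 s sampling)
T_liq, T_sol = 220.0, 139.0           # assumed liquidus and solidus arrests, degC

T = np.empty_like(t)
T[0] = 300.0                          # initial melt temperature, degC
for i in range(1, len(t)):
    rate = 0.008 * (T[i - 1] - 20.0)  # Newtonian cooling towards 20 degC ambient
    if T_sol < T[i - 1] < T_liq:
        rate *= 0.3                   # latent-heat evolution retards cooling here
    T[i] = T[i - 1] - rate

# A change in cooling rate appears as a spike in the second difference.
spikes = np.where(np.abs(np.diff(T, 2)) > 0.1)[0] + 1
print(f"liquidus arrest: t = {t[spikes[0]]:.0f} s, T = {T[spikes[0]]:.1f} degC")
print(f"solidus arrest:  t = {t[spikes[-1]]:.0f} s, T = {T[spikes[-1]]:.1f} degC")
```

Applied to real thermocouple data, the same second-difference test (with some smoothing) picks out the liquidus and solidus for each composition.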
### Method An experiment can be performed to get a rough idea of a phase diagram by recording cooling curves for alloys of two metals in various compositions. The alloy chosen for this example is bismuth-tin; both metals have low melting points, and so can be heated and cooled relatively quickly and easily in the lab. So that the experiment could be performed in a reasonable time, 11 compositions were used, from pure bismuth to pure tin in steps of 10%. All the compositions were measured in weight percent. (Photographs: heating apparatus and crucible.) The apparatus pictured was set up, with a maximum temperature of around 300°C. The sample in the crucible cooled, and the temperature was recorded at regular intervals. In the case of the results produced here, readings were made every two seconds, using a computer. However, it is adequate to take readings every 15 seconds manually. ### Results The procedure was repeated for all 11 compositions, and the following results recorded: ![Graph showing the temperature of several tin-bismuth alloys cooling over a period of time](images/graph1.gif) These lines can be made clearer by spacing them along the time axis, so that the alloys with a higher bismuth content appear further to the right, as shown below. ![Graph showing the temperature of several tin-bismuth alloys cooling over a period of time, lines displaced along the time axis](images/graph2.gif) From these graphs it is possible to pick out the changes in gradient which allow a simplified phase diagram to be drawn. On the diagram with the displaced time-temperature plots, the changed gradients, i.e. the parts where some of the liquid is solidifying, are coloured white. These show the top of the liquidus and the bottom of the solidus. If these curves are now straightened out, and the colours kept, with the white representing the solidification, it is possible to construct a basic phase diagram, as was shown for an isomorphous system in the previous section. This is shown below, where the white line is part of the phase diagram, constructed from the cooling curves, and the thin grey line is the actual phase diagram, found from many experiments over a long period of time. ![Graph comparing phase diagram constructed from cooling curves with that found from many experiments](images/graph3.gif) ### Analysis It can be seen from the diagram above that the recorded phase diagram is roughly 15°C lower than it should be, and that some of the measurements of the liquidus are not in the expected places. The lowering of the diagram is due to the thermocouple being contained in a glass rod, rather than actually touching the alloy. The occasional unexpected liquidus temperatures are probably due to the compositions being slightly inaccurate. It is also worth noting that in this projected phase diagram it was not possible to draw in a proper solidus on the right-hand side, as none of the compositions near pure bismuth showed evidence of a solidus. As such, a dotted line has been plotted as an estimate of where it would go. The microstructure of the alloy changes with composition. This can be seen in scanning electron microscope (SEM) images taken from each of the compositions used in the experiment above. (Interactive Sn-Bi phase diagram and SEM images.) The lever rule = If an alloy consists of more than one phase, the amount of each phase present can be found by applying the lever rule to the phase diagram, as derived and illustrated with a worked example below.
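As a numerical companion to the balance analogy and the worked example that follow, here is a minimal lever-rule sketch; the compositions used are the tie-line values from point 3 of that example:

```python
# Minimal sketch of the lever rule. C is the overall alloy composition and
# C1 < C2 are the tie-line endpoint compositions (all in weight% B).
def lever_rule(C, C1, C2):
    """Return (fraction of the C1-side phase, fraction of the C2-side phase)."""
    f2 = (C - C1) / (C2 - C1)   # lever arm on the far side gives phase 2
    return 1.0 - f2, f2

# Point 3 of the worked example below: C = 65, C1 = 58 (liquid), C2 = 92 (beta)
f_liquid, f_beta = lever_rule(65.0, 58.0, 92.0)
print(f"liquid: {f_liquid:.1%}, solid beta: {f_beta:.1%}")  # ~79.4% and ~20.6%
```

The worked example below rounds these figures to 80 weight% liquid and 20 weight% β.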
The lever rule can be explained by considering a simple balance. The composition of the alloy is represented by the fulcrum, and the compositions of the two phases by the ends of a bar. The proportions of the phases present are determined by the weights needed to balance the system. ![Diagram illustrating the lever rule](images/diagram31.gif) So, fraction of phase 1 = (C2 - C) / (C2 - C1) and fraction of phase 2 = (C - C1) / (C2 - C1). ### Lever rule applied to a binary system ![Schematic phase diagram for a binary system](images/diagram32.gif) #### Point 1 At point 1 the alloy is completely liquid, with a composition C. Let C = 65 weight% B. #### Point 2 At point 2 the alloy has cooled as far as the liquidus, and the solid phase β starts to form. Phase β first forms with a composition of 96 weight% B. The green dashed line below is an example of a *tie-line*. A tie-line is a horizontal (i.e., constant-temperature) line through the chosen point, which intersects the phase boundary lines on either side. ![Part of a phase diagram](images/diagram33.gif) #### Point 3 A tie-line is drawn through the point, and the lever rule is applied to identify the proportions of phases present. ![Part of a phase diagram](images/diagram34.gif) Intersection of the lines gives compositions C1 and C2 as shown. Let C1 = 58 weight% B and C2 = 92 weight% B. So, fraction of solid β = (65 - 58) / (92 - 58) ≈ 20 weight% and fraction of liquid = (92 - 65) / (92 - 58) ≈ 80 weight%. #### Point 4 ![Part of a phase diagram](images/diagram35.gif) Let C3 = 48 weight% B and C4 = 87 weight% B. So fraction of solid β = (65 - 48) / (87 - 48) ≈ 44 weight%. As the alloy is cooled, more solid β phase forms. At point 4, the remainder of the liquid becomes a eutectic phase of α+β, and fraction of eutectic ≈ 56 weight%. #### Point 5 ![Part of a phase diagram](images/diagram36.gif) Let C5 = 9 weight% B and C6 = 91 weight% B. So fraction of solid β = (65 - 9) / (91 - 9) ≈ 68 weight% and fraction of solid α = (91 - 65) / (91 - 9) ≈ 32 weight%. Modern uses = Phase diagrams are not just an abstract construction - they have applications in the real world, in deciding which compositions to use. A major use of eutectics, or near-eutectics, is in solder. In plumbing, solder is used to join copper pipes together, producing a waterproof seal. For many years a lead-tin alloy was used, as this has a low melting point, especially at the eutectic composition. However, although a low melting point is sought, it is useful to be able to move the pipes around slightly when they are in place and the solder is solidifying. This means that a eutectic should not be used: although it has the lowest melting point for the alloy system, it will all solidify at once, leaving little room for error. Instead it is useful to use an alloy whose composition deviates slightly from that of the eutectic, so that the solidification will take longer, making the solder easier to use, despite the higher temperatures, and so resulting in a better join. Electrical solder uses a similar alloy to join parts of an electronic circuit together. In the case of a standard electrical solder, the eutectic is used, as high temperatures are to be avoided, and it is useful for the solder to solidify all at once. In more modern soldering applications, such as a ball grid array which joins the chip to some circuit boards, the eutectic is still used. However, there are also situations where a slightly off-eutectic composition can be used.
If there are several processing steps, it is useful to start off with a higher-melting-point alloy, and only use the eutectic in the final soldering stage. This allows the previous solders to stay in place when the heating takes place in the later stages. Modern solders have moved away from lead-based alloys because of environmental considerations, and these have been replaced with new alloys. In the case of plumbing, there is a tendency to use plastic piping instead of copper piping, so there is less need for solder in this industry. In the electronics industry, lead-tin is being replaced by copper-, tin- and silver-based systems, which pose fewer environmental problems but can still provide the low melting points and flexibility that the lead-tin systems offered. Much of the modern research on solders relates to alternative systems and characterising them for use. Part of this characterisation involves the production of phase diagrams to allow a good choice of composition for the right properties. In recent times, it has become necessary to mix several elements in order to improve the properties of the materials. It is therefore useful to create phase diagrams which involve three elements, called "ternary" diagrams. These are more complicated than two-element "binary" phase diagrams, but allow improved optimisation, and hence can produce better results. Further considerations = ### Other methods for constructing phase diagrams Although the easiest way to investigate phase transformations is by the use of time-temperature cooling curves, there are many other ways to investigate these changes. This is most helpful when observing solid-to-solid phase changes, as these take longer to reach equilibrium, and so the cooling of a material must take place at a slower rate in order to get accurate results. Unfortunately, as the rate of cooling decreases, it gets harder to detect the latent heat being released, and so other methods must be sought. The thinking behind most of the methods is the same: as the material changes phase, there will be changes in its physical properties. As such, the phase change can be detected by observing a property as the temperature is reduced. Examples of such properties are density, electrical or thermal resistance, optical properties, Young's modulus and damping efficiency. Another frequently used measure is that of interplanar spacings in the crystalline phases, which can be measured using X-ray diffractometry. These techniques have been used to map out phase diagrams, but they are always time-consuming, especially during the solid-state transformations. ### Dangers in interpretation of phase diagrams An essential point to remember is that during normal or fast cooling, results may not be as the phase diagram predicts. Both the theory and the experiments used to construct phase diagrams rely on the assumption that the system is in equilibrium, which is rarely the case, as this only occurs properly when the system is cooled very slowly. In order to reach full equilibrium, the solute in the solid phases must stay completely uniform throughout the cooling. However, in most systems, if the system is not cooled too quickly, the phase diagram will give fairly accurate results. In addition, near the eutectic, the results become even closer to the phase diagram, as the liquid solidifies at nearly the same time.
The non-equilibrium conditions can sometimes be of benefit, however: microstructures stable at higher temperatures in a phase diagram may sometimes be preserved to lower temperatures by fast cooling, i.e. quenching, or unstable microstructures may occur during fast cooling, which can be useful when hardening an alloy. Summary = In this package, the theoretical background to phase diagrams has been shown, as well as a method for constructing part of a diagram. Explanation has been given of how to use a phase diagram, and how it applies to real systems and to understanding solidification. It should now be appreciated that phase diagrams are a valuable resource in predicting the behaviour of alloys during solidification, and the microstructures which will be produced. However, there should also be an understanding that in normal cooling conditions, although phase diagrams are generally fairly accurate, they are not always exact, and if diffusion is slow there may be unexpected results. Questions = ### Quick questions *You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!* 1. What is the effect on the shape of the free-energy curve for a solution if its interaction parameter is positive? | | | | | - | - | - | | | a | Produces a curve which has one minimum | | | b | Produces a curve with no minimum and one maximum | | | c | Produces a curve which contains a maximum at low T | | | d | Produces a curve which contains a maximum at high T | 2. In terms of interatomic bonding, what does a negative interaction parameter represent? | | | | | - | - | - | | | a | That A-A and B-B bonds are more favoured | | | b | That A-B bonds are more favoured | | | c | That A-A bonds are more favourable than B-B bonds | 3. Cooling curve for a binary system: ![Cooling curve for a binary system](images/diagram37.gif) | | | | | - | - | - | | | a | One phase | | | b | Two phases | | | c | Three phases | | | d | Four phases | 4. What is a hypoeutectic alloy? | | | | | - | - | - | | | a | An alloy which has a solute content greater than that of the eutectic. | | | b | An alloy which has a solute content lower than that of the eutectic. | | | c | An alloy whose solute content is such that it contains no eutectic. | | | d | An alloy whose final microstructure is wholly eutectic. | 5. Look at the following Al-Si phase diagram. For an Al-5 wt% Si alloy, what will be the composition of the solid in equilibrium with the liquid at 600°C? | | | | | - | - | - | | | a | 1 wt% | | | b | 2 wt% | | | c | 9 wt% | | | d | 13 wt% | 6. Look at the following Cu-Al phase diagram. What will be the relative weight fraction of CuAl2 (θ) in an Al-15 wt% Cu alloy at its eutectic temperature? | | | | | - | - | - | | | a | 66.6% | | | b | 33.3% | | | c | 50% | | | d | 75% | ### Deeper questions *The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.* 7. Using the following data, calculate the volume fraction of the beta phase and of the eutectic at the eutectic temperature, for an alloy of composition 75 wt% Ag. Assume equilibrium conditions. At the eutectic temperature: eutectic composition = 71.9 wt% Ag; maximum solid solubility of Cu in Ag = 8.8 wt% Cu; density of Ag = 10 490 kg/m³; density of Cu = 8 920 kg/m³. 8.
Composition vs temperature phase diagrams exist for the combinations of three elements A, B and C (i.e. the three phase diagrams A-B, A-C and B-C). How might they be arranged in three-dimensional space to construct a "ternary" phase diagram for a system containing A, B and C? ### Open-ended questions *The following questions are not provided with answers, but are intended to provide food for thought and points for further discussion with other students and teachers.* 9. Under what conditions could the compositions of the phases present differ from those predicted by the phase diagram? Going further = ### Books * Porter, D. A. and Easterling, K., *Phase Transformations in Metals and Alloys*, 2nd edition, Routledge, 1992. * Smallman, R. E., *Modern Physical Metallurgy*, Butterworth, 1985. * John, V., *Understanding Phase Diagrams*, Macmillan, 1974. ### Websites * An Adobe Acrobat file authored by Professor P.J.M. Monteiro at the University of California, Berkeley. * A MATTER module with storyboard by Professor Bill Clyne providing an in-depth look at processes involved in solidification.
Aims On completion of this TLP you should: * understand the phenomenon of birefringence * know how materials scientists use photoelasticity * be able to distinguish between isochromatic and isoclinic fringes * be able to relate stress patterns in polycarbonate rods to the bending rig used and the deformation in the rod * know and be able to use the Stress-Optic Law Introduction The photoelastic effect (alternatively called the piezo-optical effect) is the change of refractive index caused by stress. Applications of photoelasticity involve applying a given stress state to a model and utilising the induced birefringence of the material to examine the stress distribution within the model. The magnitude and direction of the stresses at any point can be determined by examination of the fringe pattern, and related to the studied structure. Two different types of fringes can be observed in photoelasticity: isochromatic and isoclinic fringes. Isochromatic fringes are lines of constant principal stress difference, (σP - σQ). If the source light is monochromatic these appear as dark and light fringes, whereas with white-light illumination coloured fringes are observed. The difference in principal stresses is related to the birefringence, and hence the fringe colour, through the Stress-Optic Law. Isoclinic fringes occur wherever either principal stress direction coincides with the axis of polarisation of the polariser. Isoclinic fringes therefore provide information about the directions of the principal stresses in the model. When combined with the values of (σP - σQ) from the photoelastic stress pattern, isoclinic fringes provide the necessary information for the complete solution of a two-dimensional stress problem. A standard plane polariscope shows both isochromatic and isoclinic fringes, and this makes quantitative stress analysis difficult. Isoclinic fringes can be removed by using a circular polariscope. Image-capturing and digital-processing techniques also allow for the separation of the isoclinic and isochromatic fringe patterns. Isoclinic fringes can be observed by reducing the number of isochromatic fringes, through either applying a smaller load or using a material with a high material fringe constant. The two types of fringes can be distinguished by rotating the specimen in a plane polariscope. Isoclinic fringes will vary in intensity during rotation, whereas isochromatic fringes will be invariant to the orientation of the specimen with respect to the polariser and analyser. ![Animated image showing plane polariscope images of the same ruler in different orientations](images/rotating-ruler.gif) Plane polariscope images of the same ruler in different orientations. Note the varying intensity of the isoclinic fringes. History = Photoelasticity was developed at the turn of the twentieth century. The early work of E. Coker and L. Filon at the University of London enabled photoelasticity to be developed rapidly into a viable technique for qualitative stress analysis. It found widespread use in many industrial applications, as in two dimensions it exceeded all other techniques in reliability, scope and practicability. No other method had the same visual appeal or covered so much of the stress pattern. The development of digital polariscopes using LEDs and laser diodes enables continuous online monitoring of structures and dynamic photoelasticity. Developments in image processing allow the stress information to be extracted automatically from the stress pattern.
The development of rapid prototyping using stereolithography allows the generation of accurate three-dimensional models from a liquid polymer, without the use of the traditional moulding method. The advent of superior computer processing power has revolutionised stress analysis. Finite element modelling (FEM) has become the dominant technique, overshadowing many traditional techniques for stress analysis. Despite FEM advances, photoelasticity, one of the oldest methods for experimental stress analysis, has been revived through recent developments and new applications. When using FEM, it is crucial to assess the accuracy of the numerical model, and ultimately this can only be achieved by experimental verification. For example, a threaded joint experiences non-uniform contact, which is difficult to incorporate accurately into a computer model. Idealised models therefore tend to underestimate the actual maximum stress concentration at the root of the thread. Photoelasticity therefore remains a major tool in modern stress analysis. Optical anisotropy in polymers = For a polycarbonate with a relatively simple carbon-carbon backbone, the refractive index is larger for vibrations of the electric field in the light wave (the electric vector) parallel to the axis of the chain. The applied stress causes alignment of the random chains, and the inherent uniaxial anisotropy of the chain structure leads to 'stress-induced birefringence'. Birefringence in polycarbonate specimens arises due to two effects: non-random chain alignment and residual strains. To minimise pre-existing strains, the polycarbonate specimens used in these demonstrations were annealed to remove any strains incorporated during their fabrication. ### **Question** Study the isoclinic fringes from both the protractor and ruler. How do you think these articles were manufactured, and does the stress distribution support this? (Answer below) | | | | - | - | | Photograph of a protractor viewed through a plane polariscope Protractor viewed through plane polariscope. | Photograph of a ruler viewed through a plane polariscope Ruler viewed through plane polariscope. | ### **Answer** Common processing routes such as injection moulding and extrusion lead to a residual stress distribution. Plastic rulers and protractors have residual strains due to their production by either extrusion or injection moulding. When observed under crossed polars they display birefringence, which enables the point of injection to be determined. ### Induced optical anisotropy In response to an applied stress a substance may change its dielectric constant and consequently, in transparent materials, change its refractive index. For an initially isotropic material, when a tensile stress is applied the material will become uniaxial, with the optic axis parallel to the applied stress. In the general case, where the stresses are applied in a plane, the optical birefringence will be proportional to the difference between the two (orthogonal) principal stresses in the plane. This is called the *Stress-Optic Law*. The constant of proportionality is known as the *stress-optical coefficient*. Light transmitted by an anisotropic material (see also the related TLP on optical anisotropy) When monochromatic light is incident on the polariser, only the component of light with an electric vector parallel to the axis of the polariser will be allowed to pass through.
When the plane-polarised light arrives at the specimen it is refracted and, if the material of the specimen is anisotropic, it is split into two separate waves, one vibrating parallel to one permitted vibration direction and the other wave parallel to the other (orthogonal) permitted vibration direction. The velocities of these waves will be determined by the relevant refractive indices, which will be different for the two directions, and therefore the waves will become progressively out of phase as they pass through the material. The phase difference can alternatively be expressed in terms of the *optical path difference*, the distance that progressively separates points on the two waves that coincided initially. Upon emerging, the two waves recombine; however, the exact way they recombine will depend on the phase difference between them, which depends on the difference between the two refractive indices, the *birefringence*, Δn, and the distance travelled by the light through the specimen. In general the resultant wave will have a component of its electric vector parallel to the analyser direction. Plane-polarised light has its electric vector vibrating along one direction, the *polariser direction*. When a material is orientated so that one of the permitted vibration directions lies parallel to the polariser direction, the light travels through the specimen without change in its polarisation state, and therefore emerges from the specimen with its electric vector still parallel to the polariser direction, and so perpendicular to the analyser direction. This light will not pass through the analyser. These settings are known as extinction positions and produce isoclinic fringes, fringes which occur wherever either principal stress direction coincides with the polariser direction. The transmitted intensity will also be zero when the optical path difference is an integral number of wavelengths (the phase difference is an integral multiple of 2π). In this case, the beams recombine to give a beam with the same polarisation state as the incident beam, i.e. with the electric vector parallel to the polariser direction, and hence the transmitted intensity is zero. We have seen that an applied stress can result in a change in the refractive index of a transparent substance. If a general system of stresses is applied in a plane, the optical birefringence, Δ*n*, produced will be proportional to the difference, Δσ, between the two principal stresses in the plane. We can define the stress-optical coefficient *C*, such that |Δ*n*| = *C*Δσ For a sample of uniform thickness, regions in which Δσ [or equivalently (σP - σQ)] is constant show the same interference colour when viewed between crossed polars. Contours of constant principal stress difference are therefore observed as isochromatic lines. In order to determine the directions of the principal stresses it is necessary to use isoclinic lines, as these dark fringes occur whenever the direction of either principal stress aligns parallel to the analyser or polariser direction. Experiment = In this experiment we investigate photoelasticity, the phenomenon of inducing birefringence in a substance through the application of a stress system. We investigate the birefringence induced in polycarbonate when it is subjected to three- and four-point bending. Three different specimens are investigated: a plain bar, a bar with an edge notch, and a bar with a central hole. This will give us an appreciation of the effects of geometric discontinuities on the stress state.
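The Stress-Optic Law calculation set as an activity later in this TLP can be written out in a few lines. In the sketch below, the specimen thickness (3.0 mm) and the stress-optical coefficient of polycarbonate (-78.0 × 10⁻¹² Pa⁻¹) are the values quoted in the Activities section; the optical path difference is an assumed, illustrative Michel-Levy reading of one first-order fringe:

```python
# Minimal sketch of a Stress-Optic Law estimate: |delta_n| = C * delta_sigma.
# Thickness and C are the values quoted in the Activities section; the
# optical path difference (OPD) is an assumed, illustrative reading.
C   = -78.0e-12   # stress-optical coefficient of polycarbonate, Pa^-1
t   = 3.0e-3      # specimen thickness, m
opd = 550e-9      # optical path difference, m (assumed first-order fringe)

delta_n = opd / t                  # birefringence, |n_Q - n_P|
delta_sigma = delta_n / abs(C)     # principal stress difference, Pa
print(f"|delta n| ~ {delta_n:.2e}")
print(f"sigma_P - sigma_Q ~ {delta_sigma / 1e6:.2f} MPa")
```

With these numbers the principal stress difference comes out at roughly 2.4 MPa; a real estimate would use the path difference actually read off the chart at the point of interest.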
Some calculations will then be made using the Stress-Optic Law. Simple apparatus is used to deform the polycarbonate specimens. The apparatus comprises an aluminium base fitted with two cylindrical stops and a central groove into which a brass slide may be slotted, depending on the stress state to be applied. For three-point bending a slide with a single point is used, and for four-point bending a slide with two well-separated points. A screw attached to the base allows the slides to be moved normal to the specimen, which is placed against the two cylindrical stops, subjecting the strip to stress. The apparatus is orientated between crossed polars so that the length of the specimen is in the 45° position, i.e. at 45° to the polariser direction. | | | | - | - | | Diagram of a 3-point bending rig | Diagram of a 4-point bending rig | | 3-point bending rig | 4-point bending rig | Stress patterns: 3-point bending = ### Plain bar The stress originates in the centre of the long edges of the bar, unsurprisingly, as these are the parts of the bar furthest from the neutral axis and experience the greatest deflection upon bending. As the load is slowly increased, the stressed region moves laterally along the bar as well as in towards the neutral axis. | | | | - | - | | Photograph of an annealed bar undergoing 3-point bending under polarised light | Photograph of an annealed bar undergoing 3-point bending under polarised light | | Annealed bars in 3-point bending rigs | ### Bar with notch The stress pattern is symmetric about the notch, and is similar to the lobes which radiate from a crack to indicate the regions of plastic deformation: there are dumbbell-shaped lobes above and below the notch. The notch was lined up so as to be exactly opposite the point of load. Upon initial loading, stress develops from either side and slowly builds up, reducing the dark area of no stress until there is only a very small region in the centre of the bar that is not stressed. | | | | - | - | | Photograph of an annealed notched bar undergoing 3-point bending under polarised light | Photograph of an annealed notched bar undergoing 3-point bending under polarised light | | Annealed, notched bars in 3-point bending rigs | ### Bar with hole In this bar the neutral axis is interrupted by a circular hole. The stress fields are deflected by this hole, as demonstrated by the perturbations in the isoclines in the vicinity of the hole. Note the stress concentration that has built up at the edges of the hole. | | | - | | Photograph of an annealed bar with hole undergoing 3-point bending under polarised light | | Annealed bar with hole in a 3-point bending rig | Stress patterns: 4-point bending = ### Plain bar The bar within the inner loading span of the four-point bending arrangement experiences a pure bending moment. Within this region the stress is parallel to the long edges of the bar, but its magnitude varies across the bar from a maximum tensile stress at the outer (convex) edge, through zero on the neutral axis, to a maximum compressive stress at the inner (concave) edge. Between the two inner loading points on the bar is a region of pure bending. In the bar the longitudinal stress is a principal stress; there are no vertical or horizontal shear stresses and no transverse normal stresses.
Therefore, if at a point P a distance *y* from the neutral axis there exists a stress which would cause a phase difference of one wavelength, then all points on the horizontal line through P parallel to the longitudinal axis of the bar would cause the same phase difference. When the applied load is increased, the first areas to show photoelastic effects are the extreme edges of the sample, which are obviously under the greatest stress. As the stress is increased, bands originate at the edges of the sample and travel inwards towards the neutral axis. The spacing between these bands becomes narrower as the load is increased, but the bands remain straight and parallel to the neutral axis.

[Photographs: annealed bars in 4-point bending rigs under polarised light]

### Bar with notch

The general stress distribution is similar to that in the plain specimen, except in the vicinity of the notch where the stress pattern is distorted. The notch acts as a stress concentrator, focussing the stress at its tip. The concentration of stress is local, because a few crack lengths away the stress returns to its pure bending distribution. Around the notch the isochromatic fringes bunch up, indicating a stress concentration there. Around the notch the principal stress directions are altered and no longer lie parallel to the longitudinal and transverse directions of the specimen.

[Photographs: annealed, notched bars in 4-point bending rigs under polarised light]

### Bar with hole

As the applied load is increased, the bar behaves similarly to the plain sample. The hole is on the neutral axis, and therefore in the least stressed position. As the stress increases, the isoclinic fringes approach the edges of the hole, and for high stresses isochromatic fringes representing a high stress concentration can be seen around the edges of the hole, particularly in the transverse positions.

[Photographs: annealed bars with holes in 4-point bending rigs under polarised light]

Activities

Sketch the pattern of isochromatic regions. Which of the isochromatic regions corresponds to the unstressed region of the bar? To determine the birefringence, use a Michel-Levy chart, whose axes are retardation (birefringence times thickness) versus the thickness of the specimen. On the chart are plotted lines of constant birefringence. The thickness of the bar was 3.0 mm. Use the Michel-Levy chart to determine the optical path differences arising from the stress at the midpoints of the two long edges of the bar. Hence estimate the stress at these points using the Stress-Optic Law

\[ n_Q - n_P = C(\sigma_P - \sigma_Q) \]

The principal stresses, \(\sigma_P\) and \(\sigma_Q\), in the plane section are aligned parallel to orthogonal directions P and Q; \(n_P\) is the refractive index for the vibration direction parallel to P and \(n_Q\) is the refractive index for the vibration direction parallel to Q. The stress-optical coefficient of polycarbonate is −78.0 × 10⁻¹² Pa⁻¹.
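As a minimal numerical sketch of this activity (the optical path difference below is an assumed illustrative value standing in for one read off the Michel-Levy chart; the value you measure will differ):

```python
# Sketch: estimating the principal stress difference from an optical path
# difference using the Stress-Optic Law |delta_n| = C * delta_sigma.
# The path difference is an assumed illustrative value, not a measurement.

C = -78.0e-12     # stress-optical coefficient of polycarbonate / Pa^-1
t = 3.0e-3        # specimen thickness / m
opd = 550e-9      # assumed optical path difference / m (from the chart)

delta_n = opd / t               # birefringence = retardation / thickness
delta_sigma = delta_n / abs(C)  # principal stress difference / Pa

print(f"birefringence = {delta_n:.2e}")
print(f"principal stress difference = {delta_sigma / 1e6:.2f} MPa")
```

For these assumed numbers the estimate comes out at roughly 2 MPa, a plausible order of magnitude for the lightly loaded polycarbonate strips used here.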
Video clips

### 3-point bending

#### Normal bar

[Videos: annealed bar in 3-point bending rig under a circular polariscope; close-up of annealed bar in 3-point bending rig under a circular polariscope; Abaqus-generated simulation of a bar undergoing 3-point bending]

#### Bar with notch

[Videos: annealed bar with notch in 3-point bending rig under a circular polariscope; Abaqus-generated simulation of a bar with notch undergoing 3-point bending]

#### Bar with hole

[Videos: annealed bar with hole in 3-point bending rig under a circular polariscope; Abaqus-generated simulation of a bar with hole undergoing 3-point bending]

### 4-point bending

#### Normal bar

[Videos: annealed bar in 4-point bending rig under a circular polariscope; close-up of annealed bar in 4-point bending rig under a circular polariscope; Abaqus-generated simulation of a bar undergoing 4-point bending]

#### Bar with notch

[Videos: annealed bar with notch in 4-point bending rig under a circular polariscope; Abaqus-generated simulation of a bar with notch undergoing 4-point bending]

#### Bar with hole

[Videos: annealed bar with hole in 4-point bending rig under a circular polariscope; Abaqus-generated simulation of a bar with hole undergoing 4-point bending]

Summary

In some transparent materials the application of a stress changes the refractive index for light travelling through that material. If the magnitude of the applied stress is different in different directions in the material, the refractive index will also vary with orientation. This "photoelastic effect" has useful applications, for example in observing the variations in magnitude and orientation of the stresses in samples. Some examples of such observations have been presented and the underlying science required to understand them has been introduced. In particular, the concept of "permitted vibration directions", each having a different refractive index for light travelling through an anisotropic material, has been presented, and the resulting "phase difference" (or equivalently "optical path difference") has been identified as providing the key to understanding the patterns of fringes observed in the stressed samples. A fuller discussion of the propagation of light through anisotropic materials is presented in the TLP "".

Questions

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. Which of the following is the best definition of *isotropic*?

   a. Invariant with respect to time
   b. Invariant
   c. Invariant with respect to all material properties
   d. Invariant with respect to direction

2. How many permitted vibration directions does plane polarised light have?

   a. None
   b. Two
   c. One
   d. Infinite number

3. What does Δ*n* represent in the equation for the Stress-Optic Law?
   a. Thickness of material
   b. Stress-optical coefficient
   c. Birefringence
   d. Stress

4. What are the quantities on the axes of a Michel-Levy chart?

   a. Retardation against thickness of specimen
   b. Birefringence against thickness of specimen
   c. Birefringence against time
   d. Retardation against time

5. What is the difference between isochromatic and isoclinic fringes?

   a. Isochromatic fringes are obtained using monochromatic light, whereas isoclinic fringes are obtained using white light.
   b. Isoclinic fringes are obtained when the principal stress direction coincides with the polarisation of the polariser; isochromatic fringes are lines of constant stress difference.
   c. Isochromatic and isoclinic fringes occur in different types of plastic.
   d. Isoclinic fringes are lines of constant stress difference; isochromatic fringes are obtained when the principal stress direction coincides with the polarisation of the polariser.

Going further

### Websites

* Part of the excellent Molecular Expressions website; includes interactive Java applets on double refraction, birefringent crystals and polarised light microscopy.
* A site based at Nanyang Technological University, Singapore, including a paper on Recent Advances in Photoelastic Applications.
Aims

On completion of this TLP you should:

* Understand the atomic basis for piezoelectricity.
* Understand how this basis scales up to a full effect on the macro scale.
* Understand how piezoelectrics tie in with ferroelectrics and how their properties arise from the same basis.
* Understand how the properties of piezoelectrics are put to use both industrially and commercially.

Before you start

This TLP should be fairly self-contained, but some knowledge of crystal structures is assumed. It may be helpful to read the TLPs on , and also before you start this TLP.

Introduction

The direct piezoelectric effect was originally discovered in 1880 by Jacques and Pierre Curie. It did not come into widespread use until the First World War, when quartz was used in SONAR. This was replaced in the Second World War by barium titanate, and later by the lead-based piezoelectrics that are in widespread use today.

The piezoelectric dipole moment

When a piezoelectric is placed under a mechanical stress, the atomic structure of the crystal changes, such that ions in the structure separate and a dipole moment is formed. For a net polarisation to develop, the dipole formed must not be cancelled out by other dipoles in the unit cell. For this, the piezoelectric atomic structure must be non-centrosymmetric, i.e. there must be no centre of symmetry. Materials which are centrosymmetric, when placed under stress, experience symmetrical movement of ions, meaning there is no net polarisation. This can be seen in the following picture, in which there is an atom in a tetrahedral interstice. This material is ZnS, sphalerite, a Zn f.c.c. structure, which has an S atom in half of the tetrahedral interstices.

![Structure of ZnS (sphalerite)](images/img001.gif)

Locally, in each interstice, there is no centre of symmetry, so when a stress is applied, the motion of the central atom results in a dipole moment. Consider a single tetrahedron:

![Structure of a single tetrahedron](images/img002.gif)

When the central atom moves, a dipole moment forms:

![Structure of a single tetrahedron showing the dipole moment](images/img003.gif)

See the section on in the Ferroelectric Materials TLP for more information on the associated mathematics.

Polarisation

The polarisation is defined as the dipole moment per unit volume:

\[P = \frac{{\sum \mu }}{V}\]

In a piezoelectric which is not ferroelectric, there is no spontaneous polarisation. An applied stress, therefore, will generate a polarisation in every unit cell of the crystal, assuming the crystal is homogeneous. This polarisation is the same throughout the crystal, and will cause a charge to be developed on the surfaces of the piezoelectric, due to the large number of small charges moving. If the piezoelectric is placed in a closed circuit and subjected to a stress, then a current will be recorded, produced by the movement of charge from one face of the crystal to another. The polarisation can be described as the charge per unit area developed on the surface, as given by the equation:

\[P = \frac{Q}{A}\]

This polarisation is directly proportional to the stress applied, as described by the equation:

\[P = d\sigma \]

where *P* = polarisation, *d* = piezoelectric coefficient, σ = stress. However, while this is a direct effect, the stress can be multi-axial, so *d* can be an array of coefficients. (Also called a 3rd-rank tensor, but the meaning of this is beyond the scope of this TLP.)
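To make the direct effect concrete, here is a minimal sketch combining P = dσ with P = Q/A. The piezoelectric coefficient is an assumed order-of-magnitude value typical of quartz (a few pC/N), not a value taken from this TLP:

```python
# Sketch: surface charge developed by the direct piezoelectric effect.
# Since P = d * sigma and P = Q / A, the total charge is Q = d * F,
# independent of the face area. The coefficient is an assumed
# quartz-like order-of-magnitude value.

d = 2.3e-12   # piezoelectric coefficient / C N^-1 (assumed, quartz-like)
F = 10.0      # applied force / N
A = 1.0e-4    # crystal face area / m^2

sigma = F / A   # uniaxial stress / Pa
P = d * sigma   # polarisation = surface charge density / C m^-2
Q = P * A       # total surface charge / C (equals d * F)

print(f"stress = {sigma:.1e} Pa")
print(f"surface charge = {Q:.2e} C")
```

For these assumed numbers the charge is only a few tens of picocoulombs, which is why practical piezoelectric sensors need high-impedance charge amplifiers.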
The reverse effect can also be seen if an electric field is applied to a piezoelectric. In a reverse process to the movement of atoms causing a dipole moment, the application of an electric field causes a dipole moment to be created in order to oppose the field. This dipole moment is created by the motion of atoms, and may result in the contraction or expansion of the unit cell. As this occurs throughout the crystal, there is a large change overall, which changes the shape of the crystal. (It must be noted, however, that as there are a very large number of unit cells in a typical crystal, the actual shape change is small. The maximum strain usually seen is about 0.1%.) This effect is described by the equation:

\[\varepsilon = dE\]

where ε = strain, *d* = piezoelectric coefficient, *E* = electric field.

Atomic basis of non-spontaneously polarised piezoelectrics

Consider quartz, SiO2. In its non-stressed state, the ions are in positions which do not allow any net dipole moment to be formed. The structure of quartz is shown below:

![Structure of quartz](images/img004.gif)

In quartz, there are tetrahedra of O atoms around Si atoms, which are able to twist and change shape when a stress is applied. The change in their position leads to the formation of net dipole moments, as seen in the section. A tetrahedron of O atoms around a Si atom is marked within the quartz structure below:

![Structure of quartz](images/img005.gif)

The dipole moment appears in every unit cell in the crystal and causes polarisation.

![Structure of quartz showing dipole moment](images/img006.gif)

Spontaneously polarised piezoelectrics (on the atomic scale)

Ferroelectrics are spontaneously polarised, but are also piezoelectric, in that their polarisation changes under the influence of a stress. All ferroelectrics are piezoelectric, although not all piezoelectrics are ferroelectric. This relationship can be viewed as:

![Relationship between piezo-, pyro-, and ferroelectrics](images/img007.gif)

Pyroelectrics are materials which typically experience a decrease in polarisation when their temperature is increased. They will not be considered in this TLP, but a short aside on can be found in the Ferroelectric Materials TLP.

The piezoelectric effect in ferroelectrics is very dependent on their atomic structure. Depending on the orientation of a crystal, applying a compressive stress can increase or decrease the polarisation, or sometimes have no effect at all. To illustrate this, consider the tetragonal phase of BaTiO3, which is commonly seen at room temperature. It possesses a spontaneous polarisation, formed by the dipole moment in each unit cell. To make it simple, we will only consider a single unit cell first. Consider the unit cell of BaTiO3:

![Unit cell of BaTiO3](images/img008.gif)

Below 120°C this unit cell becomes tetragonal, and gains a spontaneous dipole moment:

![Unit cell of BaTiO3 below 120oC](images/img009.gif)

If the material is compressed along the x-axis, the important charged ions move further from their original positions, giving a higher dipole moment.

![Unit cell of BaTiO3 compressed along the x-axis](images/img010.gif)

Compressed along the z-axis, the dipole moment decreases as the ions move towards their original positions.

![Unit cell of BaTiO3 compressed along the z-axis](images/img011.gif)

This shows how polarisation can easily arise on the atomic level.

Spontaneously polarised piezoelectrics (on the macro scale)

Now, ferroelectric materials possess multiple domains. For background on this, read the TLP on . To make it simple, we will only consider single crystal ferroelectrics.
These, when first made, have domains of the form:

![Diagram of single crystal ferroelectric domains](images/img012.gif)

If a mechanical stress is applied to the ferroelectric, then there are domains which will experience an increase in dipole moment and some which will experience a decrease in dipole moment. Overall, there is no net increase in polarisation. This makes BaTiO3 useless as a piezoelectric unless it is put through some additional processing. This process is called *poling*. An electric field is applied to the ferroelectric as it passes through its Curie temperature, so that as its spontaneous polarisation develops, it is aligned in a single direction:

![Diagram of aligned single crystal ferroelectric domains](images/img013.gif)

All of the domains in the piezoelectric have a dipole moment pointing in the same direction, so there is a net spontaneous polarisation. Now, when a mechanical stress is applied, the polarisation will increase:

![Diagram of aligned single crystal ferroelectric domains under mechanical stress](images/img014.gif)

or decrease:

![Diagram of aligned single crystal ferroelectric domains under mechanical stress](images/img015.gif)

but still remain pointing in the original direction. This makes ferroelectrics into useful piezoelectrics.

Depolarisation

The poling effect turns ferroelectrics into useful piezoelectrics. However, this means they can only be used within certain well-defined limits. If piezoelectrics are used outside of these limits, the alignment of dipoles can disappear, leading to the depolarisation of the ferroelectric and removing its piezoelectric properties. This can occur in a number of ways.

**1. Thermal depoling** If the material is exposed to excessive heat, such that its temperature approaches its Curie temperature, the dipole moments regain their unaligned state. At the Curie temperature, a ferroelectric becomes entirely unaligned. To prevent this occurring, it is necessary to use piezoelectrics well below their Curie temperature.

**2. Electrical depoling** A strong electric field, applied in the reverse direction to the already poled material, will lead to depoling.

**3. Mechanical depoling** A sufficiently large stress placed on a piezoelectric can lead to depolarisation.

Applications of piezoelectric materials

Piezoelectrics are used both commercially and industrially. Commercially, their most common use is in gas lighters, where they are used to produce a spark. Industrially, they are mainly used for imaging, mostly in medicine: they are used to produce ultrasound, which is used to check on unborn babies. In a non-medical setting, ultrasound can be used to detect cracks. The same effect can also be utilised without the piezoelectric generating the wave itself. In this case the piezoelectric is used solely as a mechanical sensor: as it picks up a mechanical deformation, it generates a voltage, which can be detected.

Another very common use of piezoelectrics is in watches. A small piece of quartz crystal is used to regulate the movement of the hands, as its shape will oscillate at a known frequency when a particular voltage is applied. This oscillation can be translated into a very accurate timekeeping device.

A final possible use is that of an actuator. If the electric field applied over a piezoelectric is not oscillated, but instead simply applied, the change in shape of the piezoelectric can be used to move objects.
This is useful in micro-scale positioning, as the change in shape of the piezoelectric can be measured in microns.

PZT

PZT, or lead zirconate titanate, Pb(ZrxTi1-x)O3, is the most widely used piezoelectric. It has the perovskite structure, with Zr and Ti ions randomly placed on the B sites of the perovskite.

![PZT perovskite structure](images/img016.gif)

Its composition is varied by altering the value of x. This greatly changes the properties, giving the phase diagram below.

![Phase diagram](images/img017.gif)

If the PZT is used at 50 mol% PbTiO3, then it is near the rhombohedral/tetragonal phase boundary. This allows it to form many different polarisation states, in the <100> and <111> directions. There are many possible orientations for dipole moments to form, giving easy poling and making it a useful piezoelectric.

Summary

We have considered a microscopic picture for piezoelectrics, and we have seen some examples of the many applications in which they are exploited. Ceramic piezoelectrics (e.g. PZT) normally contain lead, but there are now lead-free piezoelectrics, e.g. with bismuth replacing the lead. There are also piezoelectric polymers, in which much larger strains come at the cost of much lower stresses.

Questions

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. What symmetry element must be absent for a material to be piezoelectric?

   a. An axis of rotation
   b. A mirror plane
   c. A centre of symmetry
   d. An improper axis of rotation

2. Applying a mechanical stress to a piezoelectric does not cause which of these?

   a. The formation of a dipole moment
   b. The movement of atoms
   c. Development of polarisation
   d. The generation of an internal current

3. The application of an electric field to a piezoelectric does not cause which of these?

   a. A change in shape of the piezoelectric
   b. The formation of a magnetic field
   c. The movement of atoms
   d. The formation of dipole moments to oppose the field

4. What does 'poling' do?

   a. Align ferroelectric dipoles to give a net polarisation in a piezoelectric
   b. Produce a non-polarised ferroelectric
   c. Increase the dipole moment in each unit cell
   d. Produce a pole

5. Which of these will not depolarise a poled ferroelectric piezoelectric?

   a. Applying a very large stress to the piezoelectric
   b. Applying a reversed electric field to the piezoelectric
   c. Applying a current perpendicular to the polarisation
   d. Applying a large amount of heat to the piezoelectric

Going further

### Books

* *Electroceramics* by A.J. Moulson and J.M. Herbert, Chapman and Hall, 1990
* *Piezoelectric Ceramics* edited by J. van Randeraat and R.E. Setterington, Mullard House, 1974

### Websites

* You may wish to look at the related TLP on .
Aims

At the end of this package you should:

* Know what polymers are and understand their classification based on structure, properties and chemical composition.
* Have a basic understanding of polymer science with which you can go on and tackle more advanced areas, such as the other polymer-related TLPs detailed on the page.

Before you start

This TLP assumes no prior knowledge of polymer science. An understanding of basic chemistry principles is advantageous but not required.

Introduction

**Polymers are large molecules made up of long sequences of smaller units.** They occur widely in nature, and synthetic polymers are a very popular manufacturing material. Polymers are relatively inexpensive to produce on a large scale, and their microstructure can be easily controlled by using different starting materials and by processing them in different ways. This leads to a very wide range of possible properties that are useful for many applications.

The basic structure of a polymer molecule is a long chain of covalently bonded atoms called the **backbone**. In most common synthetic polymers, the backbone is made of carbon. Attached to the backbone in a regular pattern are other atoms or groups of atoms called **side groups**. The simplest possible side groups are single hydrogen atoms, as in poly(ethene). The molecules that are bonded together during **polymerisation** to make up the polymer are called **monomers**. The arrangement of atoms in the monomer becomes the repeating pattern in the polymer - the **repeat unit**.

![Diagram showing equation for polymerisation of ethene](images/Polyehtylene.gif)

Synthetic examples

You can use the following Flash application to find out about some example synthetic polymers, and to view their monomers. Click and drag the 3D models to rotate them.

Examples of biopolymers

Polysaccharides - Polysaccharides are a type of carbohydrate produced by the polymerisation of individual sugar molecules (monosaccharides). They are used as an energy store (starch for plants and glycogen for animals) or for structural support, e.g. cellulose in plants.

Polypeptides - Polypeptides are formed by the polymerisation of amino acids. One or more polypeptide molecules make up a protein. Proteins have a wide variety of functions, which include providing structure and acting as enzymes, which catalyse chemical reactions. Collagen is the most abundant protein in the human body, being found in arterial walls, skin, muscle, cartilage and bone. Amylase is an enzyme which catalyses the digestion of starch, a polysaccharide, into its constituent monosaccharides.

Naming polymers

Polymer nomenclature can be a confusing area, because several different names exist for almost every polymer. These include the chemical names (which describe the chemical structure of the molecule itself), any brand names used for marketing the material, and acronyms or abbreviations for either of these. For example, the molecule poly(tetrafluoroethene) is sold under the brand name Teflon and abbreviated to PTFE. Another source of confusion is that there are several competing sets of rules for defining chemical names, not one single accepted system. Most of these involve writing **poly** followed by the **monomer name** in parentheses. For example, the polymer formed from the monomer tetrafluoroethene is simply poly(tetrafluoroethene). If the structure of the monomer is different to that of the polymer, it is more common to use the chemical name of the repeat unit instead of the monomer.
When the repeat units become very complex, yet more systems of names are used, for example for epoxies and polyamides (nylons). For clarity we won't deal with these here. International differences and colloquial names add one final level of complexity – for example, C2H4 is ethene in Europe but ethylene in North America. It is worth being familiar with the different names for some of the more common polymers, some of which you saw in the Introduction.

Shape, size & structure I

Configuration and conformation

**Configuration** is a property which encompasses:

* The direction (head-to-tail, head-to-head …) in which the monomers are linked together.
* The order (ABABAB…) in which monomers are joined together (in polymers with more than one type of monomer).

The configuration is fixed during synthesis.

![Diagram of configuration of a polymer](images/configuration.gif)

After synthesis, the molecule can change its shape by rotations about the single C-C bonds in the backbone. The particular arrangement of the chain due to these rotations is called the **conformation**.

![Diagram showing tetrahedral arrangement around a central carbon atom](images/Tetrahedron.gif)

The angles between bonds around a carbon atom in the backbone are approximately 109.5°. This is the **bond angle** and is constant.

We can illustrate conformation using a **Newman projection**. This is a diagram which shows the arrangement of the side groups, looking down the C-C bond (represented by a circle) from one carbon atom to the other. Newman projections clearly show the **torsion angle** – the angle through which one carbon atom is rotated relative to the next. There is a potential energy that comes from the interaction of side groups on adjacent carbon atoms in the chain. This energy varies as a function of the torsion angle, as side groups move relative to each other. The torsion angles with the lowest energy are more stable. These are called trans, gauche+ and gauche−. The conformations that a chain will preferentially adopt are sequences of these three stable angles. The following application is an introduction to Newman projections using the example of butane. It also illustrates the change in potential energy with torsion angle for butane.

The idea of Newman projections for butane can be extended to longer chains. In this case the CH3 groups on the diagram would be replaced by the symbol R, which stands for any group of atoms. This is shown below.

![](images/Newman-with-R.gif)

Using the following application you can build a chain of poly(ethene) with your own conformation, choosing the torsion angle at each bond.

In order to change between favourable torsion angles, the molecule must pass through those with higher energies. This represents a potential energy barrier to conformation changes. The more energy available to a molecule, the more readily it may change its conformation, and its stiffness decreases. However, the more flexible the molecule, the more likely it is to adopt a random conformation, so the stiffness of the polymer material itself may actually increase with temperature. Much more detail on conformation & stiffness in the context of rubber can be found in the .

Below a certain temperature, the glass transition temperature Tg, a molecule's internal energy is low enough that changes in shape of a molecule cannot be effected by conformation changes, but must instead be accommodated by stretching intermolecular bonds. The properties of a polymer change drastically below its glass transition temperature. This area is explored in the .
Polymer chain morphology

A single polymer chain can exist in any one of its possible conformations, from a tight coil to a straight chain. The probability of it having a particular end-to-end distance increases with the number of possible conformations that would achieve that size. There is only one possible conformation that will produce a straight chain, but as the molecule becomes more coiled the number of possibilities increases. A polymer chain will therefore tend to coil up to some extent.

The expected end-to-end distance of a chain can be estimated using a model in which a molecule is considered as being made up of a large number n of segments. Each segment is rigid, but is freely jointed at both ends, so that it can make any angle with the next segment. A model 'molecule' can then be built by adding each of the successive segments at a random angle, a procedure called a **random walk**.

How does the random walk model compare to reality?

* By the nature of a random walk, a model molecule may overlap itself. Real polymers have a finite volume, so a molecule cannot 'crash' into itself or other chains.
* The random walk model does not take account of any complicating forces, such as the interaction of electrons in bulky side groups, which tend to inhibit bond rotation.
* Atoms in the polymer backbone are not freely jointed.
* As a result of these simplifications, performing a random walk where each segment is a single C-C bond gives an underestimate of the end-to-end distance: real polymer chains are stiffer than predicted by the model.
* To take account of this, random walk segments are modelled as being several C-C bonds in length. We can then use a quantity called the **Kuhn length**, l, to represent the average length assigned to a model segment.
* The Kuhn length varies for different polymers: it is longer for a stiffer molecule.
* To illustrate this, here are some example Kuhn lengths (expressed as a multiple of the length of a C-C bond).

| Polymer | Kuhn length / C-C bond lengths | Notes |
| - | - | - |
| Poly(ethene) | 3.5 | PE is very flexible (due to low torsional barriers) |
| Poly(styrene) | 5 | PS has large side-groups which inhibit flexibility |
| DNA | 300 | DNA is very stiff due to its double helix structure |

Calculating the root mean square end-to-end distance of a random walk 'molecule'

In two dimensions, we can estimate the distance from end to end of a molecule modelled by a random walk, given the Kuhn length and the number of segments. Each segment is represented by a vector \(\vec{t}_i\) of length l, so the end-to-end vector is \(\vec{r} = \sum_i \vec{t}_i\). Because the directions of different segments are uncorrelated, the cross terms in \(\langle \vec{r} \cdot \vec{r} \rangle\) average to zero, leaving \(\langle r^2 \rangle = nl^2\), so the root mean square end-to-end distance is \(l\sqrt{n}\). You can test this model numerically – see the random walk sketch below. Although this is a two-dimensional model, extending it to three dimensions gives the same result.

Shape, size & structure II

Individual chains can be linked together to produce a branched structure or a network.

![Diagram of branching arrangements of polymers](images/branching-arrangements.gif)

Branching - Branches normally form because of side reactions that happen during the synthesis of a polymer. The degree of branching can be controlled to obtain different properties, for example by adding a second type of monomer chosen to promote branch formation.

Cross-links - Whereas branching involves joining the head of one chain to a point in the middle of another, cross-linking joins two chains together at some point along their length. Cross-linking in a polymer forms a network structure. A network is in fact one giant molecule, because the cross-links are primary chemical bonds.
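The interactive simulation from the original page is not reproduced here; as a stand-in, the following minimal Python sketch (an illustrative two-dimensional freely jointed chain, with assumed values of n and l) compares a simulated root mean square end-to-end distance with the prediction \(l\sqrt{n}\):

```python
# Sketch: 2D freely jointed chain (random walk) to test <r^2>^0.5 = l * n^0.5.
import math
import random

def rms_end_to_end(n_segments, kuhn_length, n_trials=2000):
    """Average the squared end-to-end distance over many random walks."""
    total_r2 = 0.0
    for _ in range(n_trials):
        x = y = 0.0
        for _ in range(n_segments):
            angle = random.uniform(0.0, 2.0 * math.pi)  # freely jointed
            x += kuhn_length * math.cos(angle)
            y += kuhn_length * math.sin(angle)
        total_r2 += x * x + y * y
    return math.sqrt(total_r2 / n_trials)

n, l = 1000, 1.0  # assumed illustrative segment count and Kuhn length
print(f"simulated RMS distance : {rms_end_to_end(n, l):.1f}")
print(f"predicted l * n^0.5    : {l * math.sqrt(n):.1f}")
```

Increasing `n_trials` tightens the agreement between the two printed values, as the average converges towards \(l\sqrt{n}\).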
![Diagram contrasting cross-linking and branching](images/cross-branch.gif)

Without cross-links, **van der Waals** forces hold adjacent molecules together. These are weak attractions, caused by the interaction of electrons. They break easily on heating, allowing the molecules to slide past each other. Substances made from un-cross-linked polymers are therefore able to melt, and are called **thermoplastics**.

Cross-links are primary chemical bonds, which require much more energy to break than van der Waals forces. When a polymer of this type is heated, the covalent cross-links prevent individual molecules from sliding past one another, so melting does not occur. This type of polymer is called a **thermoset**. At a sufficiently high temperature, the covalent bonds both in the cross-links and within the molecules are broken, and the thermoset decomposes.

A polymer with only a small degree of cross-linking is an **elastomer**. The cross-links are infrequent enough to allow significant conformation changes in the chains between cross-links, while still preventing individual molecules from flowing past each other. Elastomers can accommodate a large amount of recoverable deformation, behaviour typical of rubber. Much more information about the behaviour of elastomers is available in the .

Cross-links may be added after synthesis by a separate process. One such process is vulcanisation, where cross-links are added to rubber by heating it with sulphur.

![Diagram of vulcanisation process](images/Vulcanisation.gif)

Stereoregularity

Also known as **tacticity**, this property describes the regularity of the side group orientations on the backbone. The tacticity of a polymer has important implications for its degree of long-range order.

The side groups of isotactic polymers all have the same orientation:

![Diagram of isotactic polymer](images/Isotactic.gif)

Syndiotactic polymers have alternating arrangements of side-groups:

![Diagram of syndiotactic polymer](images/Syndiotactic.gif)

In atactic polymers the side-group orientations are random along the chain.

![Diagram of atactic polymer](images/Atactic.gif)

Copolymers

A polymer whose monomers are all identical is a **homopolymer**. However, polymers may be synthesised using two or more different monomers, and this produces a **copolymer**. This is a useful process; for example, if two monomers each produce a homopolymer with a desirable property, a copolymer can be produced which combines them. The properties of the copolymer depend on the monomers and their configuration; these may be divided into four categories: alternating, random, block and graft (illustrated below).

![Diagram showing copolymers](images/Copolymers.gif)

Crystallinity

Crystallinity defines the degree of long-range order in a material, and strongly affects its properties. The more crystalline a polymer, the more regularly aligned its chains. Increasing the degree of crystallinity increases hardness and density. This is illustrated in poly(ethene). HDPE (high density poly(ethene)) is composed of linear chains with little branching. Molecules pack closely together, leading to a high degree of order. This makes it stiff and dense, and it is used for milk bottles and drainpipes. The numerous short branches in LDPE (low density poly(ethene)) interfere with the close packing of molecules, so they cannot form an ordered structure. The lower density and stiffness make it suitable for use in films such as plastic carrier bags and food wrapping.
Often, polymers are semi-crystalline, existing somewhere on a scale between amorphous and crystalline. This usually consists of small crystalline regions (**crystallites**) surrounded by regions of amorphous polymer.

![Diagram showing a semi-crystalline polymer](images/Crystallinity.gif)

Factors favouring crystallinity - In general, factors causing polymers to be more ordered and regular tend to lead to a higher degree of crystallinity.

* **Fewer short branches** – allowing molecules to pack closely together
* **Higher degree of stereoregularity** – syndiotactic and isotactic polymers are more ordered than atactic polymers.
* **More regular copolymer configuration** – having the same effect as stereoregularity

This topic is covered in the .

Synthesis

The process of turning monomers into a polymer is called polymerisation. Synthesis processes are classified using two different systems: the first according to the way in which the polymers grow, and the second according to the mechanism by which the chemical reactions occur.

In the first system, synthesis processes are either chain growth or step growth:

* In **chain growth** polymerisation, an initiator molecule starts the reaction. Monomers are then joined onto an initiated chain. This is a fast process that produces long chains soon after the reaction begins. A chain is terminated when no more monomers are available or when the chain reacts with another chain.
* In **step growth** polymerisation, any monomer may react with any other, so no initiator is required. Monomers first join to form short chains (dimers, trimers…), which start to combine into longer chains once the supply of monomers begins to run out. Step growth is slower than chain growth; the process is terminated when all available monomers are used up.

In the second system, synthesis processes are either addition reactions or condensation reactions:

* The polymer is the only product of **addition reactions**. They often involve free radicals – chemical species having an unpaired electron.
* **Condensation reactions** involve the loss of a small molecule such as H2O. Because of this, the polymer is not the only product of the reaction. They can result in polymers whose repeat unit does not at first sight resemble the monomer.

Chain growth polymerisation usually occurs by an addition reaction, and step growth polymerisation usually occurs by a condensation reaction. However, there are several exceptions, which is why the two different systems of classification are needed.

| Polymer growth mechanism | Chain growth | Step growth |
| - | - | - |
| Chemical reaction mechanism | Addition reactions | Condensation reactions |

The following application describes the mechanisms of chain and step growth polymerisation.

Effect of number of functional groups - The functional groups on a monomer are those that react during synthesis to form bonds with other monomers. The number of these influences the structure of the synthesis product.

| Number of functional groups on monomer | Resulting molecule |
| - | - |
| One | Dimer (a molecule formed from two monomers) |
| Two | Straight chain polymer |
| Three or more | Branched or network polymer |

Molecular weight

This section should strictly be called "Polymer molecular mass", but in polymer science it is more common to refer to molecular weight than molecular mass, so this convention will be continued here.
The molecular weight of a synthetic polymer does not have a single value, since different chains will have different lengths and different numbers of side branches. There will therefore be a distribution of molecular weights, so it is common to calculate the average molecular weight of the polymer. However, there are several different ways to define the average molecular weight, the two most common being the number average molecular weight and the weight average molecular weight. Other averages exist, such as the viscosity average molecular weight, but they will not be discussed here.

When studying a polymer, the most relevant average depends on the property being investigated: for example, some properties may be more affected by molecules with high molecular weight than those with low molecular weight, so the weight average is chosen since it highlights the presence of molecules with high molecular weight. The average molecular weight of a polymer sample can be determined using a variety of techniques, such as gel permeation chromatography, light-scattering measurements and viscosity measurements; the type of average that is yielded depends on the technique.

Number average molecular weight, \(\overline{M_N}\)

The number average molecular weight is defined as the total weight of polymer divided by the total number of molecules.

Total weight \( = \sum_{i=1}^\infty N_i M_i \), where \(N_i\) is the number of molecules with weight \(M_i\)

Total number \( = \sum_{i=1}^\infty N_i \)

The number average molecular weight is therefore given by:

$$\overline{M_N} = \frac{\sum_{i=1}^\infty N_i M_i}{\sum_{i=1}^\infty N_i}$$

This can also be written as:

$$\overline{M_N} = \sum_{i=1}^\infty x_i M_i$$

where \(x_i\) is the number fraction (or mole fraction) of polymer with molecular weight \(M_i\).

Weight average molecular weight, \(\overline{M_W}\)

The weight average molecular weight depends not only on the number of molecules present, but also on the weight of each molecule. To calculate this, \(N_i\) is replaced with \(N_i M_i\):

$$\overline{M_W} = \frac{\sum_{i=1}^\infty N_i M_i^2}{\sum_{i=1}^\infty N_i M_i}$$

This can also be written as:

$$\overline{M_W} = \sum_{i=1}^\infty w_i M_i$$

where \(w_i\) is the weight fraction of polymer with molecular weight \(M_i\). The weight average molecular weight is therefore weighted according to weight fractions.

Polydispersity index

The polydispersity index is defined as the ratio of the weight average molecular weight to the number average molecular weight, and it gives a measure of the distribution of the molecular weight within a sample. It has a value greater than or equal to one: it is equal to one only if all the molecules have the same weight (i.e. if the sample is monodisperse), and the further it is above one, the larger the spread of molecular weights.

$$\text{Polydispersity index} = \frac{\overline{M_W}}{\overline{M_N}}$$
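To make these definitions concrete, here is a short sketch computing both averages and the polydispersity index for a hypothetical distribution (the \((N_i, M_i)\) pairs are assumed illustrative values, not data from this TLP):

```python
# Sketch: number average and weight average molecular weights, and the
# polydispersity index, for an illustrative molecular weight distribution.

# (number of molecules N_i, molecular weight M_i / g mol^-1) - assumed values
distribution = [(100, 4_000), (300, 12_000), (50, 30_000)]

total_N   = sum(N for N, M in distribution)          # total number of molecules
total_NM  = sum(N * M for N, M in distribution)      # total weight
total_NM2 = sum(N * M * M for N, M in distribution)

M_n = total_NM / total_N      # number average molecular weight
M_w = total_NM2 / total_NM    # weight average molecular weight
pdi = M_w / M_n               # polydispersity index (>= 1)

print(f"M_N = {M_n:.0f} g/mol, M_W = {M_w:.0f} g/mol, PDI = {pdi:.2f}")
```

Note that \(\overline{M_W} \ge \overline{M_N}\) always holds, so the printed PDI is at least one, and it grows as the assumed distribution is made broader.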
Molecular weight distributions

The molecular weight distribution can be shown graphically by plotting the number of molecules against the molecular weight. It is worth noting that these plots are sometimes shown with molecular weight decreasing along the *x*-axis. The distribution may be relatively simple, such as:

![Simple distribution of number of molecules v molecular weight](images/molecular-weight-0.gif)

Or it may be more complicated, such as:

![More complicated distribution of number of molecules v molecular weight](images/molecular-weight-1.gif)

In many cases, it is important to know not only the average molecular weight, but also the distribution of molecular weights. This is illustrated in the example below, in which no molecules would actually have a weight equal to the number average molecular weight, since this would lie between the two peaks.

![Bimodal distribution of number of molecules v molecular weight](images/molecular-weight-2.gif)

Polymer identification

![](imagesT/it0003.jpg)

This chart gives the overall plan of action for the following activity.

Summary

Polymers can be synthesised artificially from one or more types of monomer by chain growth or step growth polymerisation, or found in nature. Their names usually involve the name of the monomer(s) from which they were made. A polymer has a fixed configuration, but may change its shape by conformation changes – rotations around C-C bonds. The polymer may be modelled as a freely jointed chain of segments, each measuring one Kuhn length. The root mean square end-to-end distance of a random walk is given by \(l\sqrt{n}\).

Physical properties depend on structure:

* light cross-linking produces an elastomer,
* heavy cross-linking a thermoset and
* no cross-links a thermoplastic.

Properties also depend on crystallinity. The degree of crystallinity is controlled by the regularity of the polymer's structure, for example its stereoregularity (isotactic, syndiotactic or atactic) and the amount of branching. Synthesis produces a distribution of molecular weights, which has a number average and a weight average. Identification tests can be carried out with freely available materials. These are the basics of polymer science and will allow you to move on to other polymer TLPs.

Questions

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. Below are pictures of 3 polymers, one each of atactic, syndiotactic and isotactic. Which option has the correct sequence?

![](images/Syndiotactic.gif)
![](images/Isotactic.gif)
![](images/Atactic.gif)

   a. atactic, syndiotactic, isotactic
   b. atactic, isotactic, syndiotactic
   c. syndiotactic, atactic, isotactic
   d. syndiotactic, isotactic, atactic
   e. isotactic, syndiotactic, atactic
   f. isotactic, atactic, syndiotactic

2. Which of the following polymers would be likely to have the highest crystallinity?

   a. An atactic polymer with many long side branches
   b. An atactic polymer with few short side branches
   c. An isotactic polymer with few short side branches
   d. An isotactic polymer with many long side branches

### Deeper questions

*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

3. Kuhn length calculations

a) A polyethylene chain is formed from 3000 ethene monomers. Given that the length of a single carbon-carbon bond is 0.154 nm, calculate the expected end-to-end distance using the random walk model, assuming that each bond is freely jointed.
b) Given that the Kuhn length of polyethylene is equal to 3.5 C-C bond lengths, calculate an improved estimate of the expected end-to-end distance of the polymer chain.

4. A polymer sample was found to have the following molecular weight distribution:

| Number of molecules | Molecular weight / g mol⁻¹ |
| - | - |
| 200 | 5,000 |
| 300 | 10,000 |
| 400 | 20,000 |
| 100 | 40,000 |

Calculate the number average and weight average molecular weights of the sample, and hence its polydispersity index.

Going further

The following DoITPoMS TLPs cover more advanced topics relating to polymers and will provide lots of further reading. They are listed in an approximate decreasing order of relevance.
Aims

On completion of this TLP you should:

* Understand the concept of crystallinity in the context of polymeric materials
* Be able to recognise and describe spherulites in semicrystalline polymers
* Understand the mechanical behaviour of a polymer subjected to a uniaxial tensile stress, and the effects this has on the polymer's optical properties.

Before you start

You should be familiar with basic optical microscopy techniques and the concept of birefringence and photoelasticity in polymers. Awareness of X-ray diffraction and mechanical testing techniques would also be beneficial. The TLPs listed below contain useful, relevant material.

Introduction

Polymers are widely used materials. Their versatility makes them suitable for a whole range of applications, and comes from the capability of manufacturers to tailor microstructures, and therefore properties, through control of the processing conditions. An understanding of polymer crystallinity is important because the mechanical properties of crystalline polymers are different from those of amorphous polymers. Polymer crystals are much stiffer and stronger than amorphous regions of polymer. For example, high strength fibres can be produced from polyethylene, whereas it is more commonly associated with applications such as carrier bags and plastic cups, where low cost and ease of manufacture are the key considerations in the choice of material. This TLP covers the formation of crystals in polymers, how they can be observed under the optical microscope, and the response of a semicrystalline polymer sample to uniaxial stress.

Crystalline and amorphous polymers

In ceramics or metals, a crystalline solid comprises repeating unit cells that contain each of the component atoms in the material. Each unit cell is composed of one or more molecular units. In a polymer this is not possible; the molecules are chains containing potentially millions of formula units. There is, however, a repeating unit in a polymer - the monomer from which it was made. This must be the basis of both long and short-range order in a polymeric material. For example, a short section of linear poly(ethylene) looks like this:

![Schematic diagram of polymer chain](images/img001.gif)

However, the conformation of the bonds around each carbon atom can be represented schematically as follows:

![Diagram of Newman projections](images/img002.gif)

These diagrams are called Newman projections. The circle represents the C-C bond viewed end-on; the diagram is a projection along it. These two structures thus represent one half of the backbone continuing on either side of a C-C bond (trans), or both halves on the same side (gauche). Note that there are two possible gauche states, labelled gauche (-) and gauche (+). Whilst the trans conformation has a lower energy (since it's easier to position the hydrogen atoms on the carbon backbone further apart), an all-trans conformation would be a considerably more ordered structure than a random one - that is, it has a much lower entropy. Amorphous polymers are generally found in a random coil conformation and have a disordered chain structure. This is the most common structure of many polymers.
Crystalline polymers are predominantly in the all-trans conformation, and the chains are arranged in lamellae, as below:

[Diagrams: pre-folded lamellae, and polymer chains arranged in lamellae]

The polymer crystal is made up from one-dimensional chain-folded sequences, shown on the above left, where the repeat distance is given by the chain spacing. On the above right is shown a schematic arrangement of folded chains into a two-dimensional lamella. An amorphous polymer has the maximum entropy conformation (given by the Boltzmann distribution), and the chains are arrayed randomly throughout the material, making atomic positions quasi-random as in any other glassy material.

As a result of the difference between the amorphous and crystalline arrangements of polymer chains, the X-ray diffraction patterns of the two phases are very different. The amorphous phase contains no long-range order, meaning that there are no regular crystalline planes to diffract X-rays. Thus the incident X-rays are scattered randomly and there are no sharp peaks in the diffraction pattern. In the crystalline phase, the repeating lamellar chains provide a regular structure, so the diffraction pattern will contain sharp, prominent signature peaks, the positions of which depend on the exact spacing between chains.

As the degree of crystallinity of a polymer affects its properties, accurately determining it is important. X-ray diffraction can be used to determine the degree of crystallinity of a sample. Thermal analysis techniques such as differential scanning calorimetry (DSC) can also be used. The two determinations may not necessarily be in agreement, and the reasons for this are complex.

Spherulites and optical properties

Since crystallisation in polymers follows a different process to that in metals - the laying down of successive lamellar layers of polymer chain - it produces a different structure. After nucleation, growth in most polymers is faster in one preferred direction. By convention this is called the b-axis. The other two axes (the c- and a-axes) grow at the same speed, and have no set direction provided they are orthogonal to the b-axis. Thus they are free to rotate. This means that polymer crystals grow in helical strands radiating from a nucleation point. Such growth leads to the formation of structures called spherulites.

![Schematic diagram of spherulite](images/img015.gif)

On the left is a transmitted cross-polarised light micrograph of a spherulite in polyhydroxybutyrate (PHB), where further details of the sample's history can be found. The photograph displays banding and a Maltese cross pattern. These features are characteristic of polymer spherulites viewed with cross-polarised light. The orientations of the polymer chains within a spherulite are shown schematically on the above right. Note that the lamellae are growing radially, interspersed with amorphous material.

The Maltese cross is seen because polymers are birefringent. Polarised light cannot travel through a crystalline polymer if the direction of the polarisation of the light is perpendicular to the direction of the carbon chain in the polymer. As a result, when a sample is studied under crossed polars, only those polymer chains perpendicular to neither the polariser nor the analyser are visible - these are at approximately 45° to each polaroid. The TLP goes into more depth on the subject of birefringence. The banded appearance of the image is also a consequence of birefringence.
Due to a regular helical twist of the lamellae growing radially, there are regions in which the polymer chain will be orthogonal to the polarisation of the light in the x-z plane, even if it is running at 45° to it in the x-y plane. These positions will occur at periodic intervals, once every full 360° rotation of the polymer chain. This translates to a given length outwards along each strand, and is observed under crossed polars as alternating dark and light areas.

The image above shows PHB spherulites viewed with a transmitted cross-polarised light microscope. A Maltese cross can be seen in each spherulite and each one has a banded appearance. The photograph shows a good example of impingement, which occurs when spherulites growing outwards from single nucleating points meet each other. Due to impingement they are unable to continue growing out radially in all directions. A polygonal microstructure is formed, as seen in this photograph.

Polymer stress-strain curve

Stress-strain curves show the response of a material to an applied (usually tensile) stress. They allow important information such as a material's elastic modulus and yield stress to be determined. Accurate knowledge of these parameters is paramount in engineering design. Polymer stress-strain curves are produced by stretching a sample at a constant rate through the application of a tensile force. By using a constant rate of testing, the strain-rate dependency of polymer behaviour is not allowed to dominate. However, it should be appreciated that polymers have a marked inherent time-dependence in their response to deformation, which sets their behaviour apart from other classes of material. The following interactive object shows an idealised polymer stress-strain curve and allows you to explore the different stages in the curve as the strain is increased. Considère's construction is a useful way of considering the phenomenon of necking and cold drawing.

### Temperature dependence

As well as having significant time dependence, polymer behaviour is also temperature dependent. For example, the plot below shows schematically different types of polymer stress-strain behaviour, which occur for different types of polymers and at different relative temperatures. The blue curve is for a glassy or semi-crystalline polymer below its glass transition temperature, the red curve is for a semi-crystalline polymer above its glass transition temperature and the green curve is for a rubber.

![Graph of stress vs strain showing temperature dependence of polymer behaviour](images/img008.gif)

Stress-strain curve showing behaviour of different polymers and the effect of temperature

The exact form of the stress-strain curve depends on the polymer under investigation. Indeed, not all polymers exhibit necking and cold drawing. Polycarbonate is an example of a brittle polymer.

Tensile testing of polyethylene

### Method

In this experiment samples of polyethylene are subjected to a tensile stress. They are stretched at a constant rate until they fail. The sample is dumbbell-shaped so that it can be firmly held by each set of grips. The test is carried out using a tensometer. The photograph below is of the Hounsfield H20K-W apparatus, which was used to perform the experiment.

First the sample is secured in place. The strain rate for the test is programmed into the machine, after which the test may begin.
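Raw tensometer output is force against extension; converting it to engineering stress and strain only requires the initial gauge dimensions. A minimal sketch follows (the sample dimensions and data points are assumed illustrative values, not those of the actual specimens):

```python
# Sketch: converting force-extension data to engineering stress-strain.
# All dimensions and data points below are assumed illustrative values.

gauge_length = 25.0e-3             # initial gauge length / m (assumed)
width, thickness = 4.0e-3, 2.0e-3  # gauge cross-section / m (assumed)
area = width * thickness           # initial cross-sectional area / m^2

# (extension / m, force / N) pairs - illustrative, not measured data
data = [(0.0, 0.0), (1.0e-3, 80.0), (2.0e-3, 150.0), (5.0e-3, 160.0)]

for extension, force in data:
    strain = extension / gauge_length   # engineering strain (dimensionless)
    stress = force / area               # engineering stress / Pa
    print(f"strain = {strain:.3f}, stress = {stress / 1e6:.1f} MPa")
```

Note that these are engineering (nominal) values based on the initial cross-section; once necking begins, the true stress in the neck is higher than the nominal stress plotted here.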
After the sample has failed and the test is complete, the data is transferred to a computer, where it can be treated and presented as a stress-strain curve. The test is performed at different strain rates in order to demonstrate the time-dependence of the sample's behaviour. ### Video clips You can view video clips of a typical test run displaying necking, drawing and fracture. The video of the whole test has been speeded up by a factor of 48 - in reality these thick polyethylene samples took around 30 minutes to break at a strain rate of 10 mm/min. By dragging the video position marker to and fro it becomes easier to see certain features, such as the shrinking length of sample remaining at its original width as necking proceeds. Note the position where fracture occurs. Tensile testing of polyethylene Fracture of polyethylene in tensile test ### Results The figure below shows applied force plotted against extension for two polyethylene samples that underwent a tensile test until failure. The blue line is for a sample stretched at 10 mm/min, the pink is for 20 mm/min. ![Force plotted against extension for polyethylene samples](images/img012.gif) The blue line shows necking and an extensive period of cold drawing, which are characteristic features of the stress-strain behaviour of a semi-crystalline polymer above its glass transition temperature (Tg). For polyethylene, Tg = 148 K and the tests were carried out at room temperature (approx. 293 K). The pink line has a very different form, without the long flat section due to cold drawing seen in the blue line. At the higher strain rate (pink line) almost no cold drawing was able to take place before the sample snapped. This is a good demonstration of the time-dependent nature of the properties of polymers, i.e. the observed behaviour is dependent on the strain rate. ### Discussion The results presented here show the difference in behaviour of polyethylene as a result of a different applied strain rate. All other variables were the same for both tests. No cold drawing is observed at the higher strain rate, as the test is not slow enough for the polymer chains within the sample to disentangle and reorganise themselves parallel to the direction of applied stress; at the lower strain rate there is enough time for cold drawing to occur. These experiments have highlighted the effect of changing the strain rate on the response of polyethylene to an applied tensile stress. A series of similar experiments could also be performed in order to compare the behaviour of a number of different polymers.
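Raw tensometer output is force against extension; converting it to a nominal stress-strain curve only requires the initial gauge dimensions. The Python sketch below shows the conversion; the gauge dimensions and data points are hypothetical placeholders, not the measurements plotted above.

```python
import numpy as np

# A minimal sketch of turning tensometer output (force vs. extension)
# into a nominal stress-strain curve. The gauge dimensions below are
# hypothetical placeholders, not those of the samples in the videos.
GAUGE_LENGTH_MM = 25.0          # initial gauge length, L0
CROSS_SECTION_MM2 = 4.0 * 2.0   # initial width x thickness, A0

def to_stress_strain(force_N, extension_mm):
    """Nominal (engineering) stress and strain from raw test data."""
    strain = np.asarray(extension_mm) / GAUGE_LENGTH_MM
    stress_MPa = np.asarray(force_N) / CROSS_SECTION_MM2  # N/mm^2 = MPa
    return strain, stress_MPa

# Illustrative data points (not the measured curves shown above).
force = [0.0, 50.0, 120.0, 110.0, 108.0]
extension = [0.0, 0.5, 2.0, 10.0, 60.0]

strain, stress = to_stress_strain(force, extension)
# Young's modulus from the initial (linear) part of the curve:
E_MPa = (stress[1] - stress[0]) / (strain[1] - strain[0])
print(f"E ~ {E_MPa:.0f} MPa (from the first two points)")
```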
Summary = Polymers contain ordered crystalline regions; chains fold to produce lamellae. When lamellae are nucleated from a small area, round agglomerates known as spherulites are formed. In a fully crystalline material they impinge on each other and appear polygonal under the microscope. In this TLP, polymer crystallisation and spherulite formation have been discussed. Micrographs of spherulites showing the distinctive Maltese cross were presented and their appearance was explained in terms of birefringence. From the section of the TLP dealing with mechanical testing, you should have gained an understanding of the mechanism of necking and cold drawing by which a semi-crystalline polymer deforms under a uniaxial tensile stress above its glass transition temperature. Furthermore, Considère's construction, which is a simple way of determining whether a polymer sample will neck and cold draw, was derived. The stress-strain curves produced by such experiments should now be familiar, for varying strain rates and relative temperatures.

Questions =

1. What is the main reason for the predominance of *trans* states in crystalline polyethylene?

| | |
| - | - |
| a | Allows better chain packing to produce a denser structure |
| b | Each *trans* state is lower in energy due to steric hindrance effects |
| c | The all-*trans* conformation is entropically favoured |
| d | Formation of chain-folded lamellae requires *trans* states |

2. Rank these structural features of a quenched high molecular weight semi-crystalline polymer in order of their size, smallest first (based on average values - there will be some overlap in ranges):

A. Chain length
B. Lamellar thickness
C. Spherulite radius
D. Root-mean-squared end-to-end distance of chains

| | |
| - | - |
| a | B, D, C, A |
| b | A, C, B, D |
| c | D, C, A, B |
| d | C, B, D, A |

3. PHB (polyhydroxybutyrate) is a naturally occurring, biodegradable polymer produced by bacteria, which melts at around 200°C and forms large spherulites readily on cooling. What sort of heat treatment would result in a solid with mechanical properties most suitable for making a fizzy drinks bottle?

| | |
| - | - |
| a | Annealing at 100°C for one hour |
| b | Heating to just above 200°C until melted, followed by a rapid quench to room temperature |
| c | Heating to just above 200°C until melted, followed by a rapid quench to room temperature, and annealing at 100°C for one hour |
| d | Heating to just above 200°C until melted, followed by slow cooling to room temperature |

4. During a tensile test of a polyethylene dumbbell sample at room temperature using a strain rate of 10 mm/min, at what position relative to the sample is failure most likely to occur? Justify your answer in terms of the changes in polymer chain orientation in the sample during tensile draw.

| | |
| - | - |
| a | Exactly at the mid-point between the jaws of the tensile tester |
| b | Exactly in the middle of the neck |
| c | Equally likely at either end of the dumbbell, where the neck meets the undrawn region |
| d | At one end of the dumbbell, furthest from the point at which the necking originated |

5. How are polymer chains able to form crystallites?

6. Which distinctive shape is seen when observing a spherulite between crossed polars? Explain why this shape is seen.

7. State two techniques that could be used to determine the degree of crystallinity of a polymer.

8. Describe and explain the form of a stress-strain curve for a semi-crystalline polymer subjected to a uniaxial stress above its glass transition temperature.

9. Describe how Considère's construction can explain the phenomenon of necking and stable cold drawing in polymers. What other types of behaviour may be observed for a polymer sample subjected to a uniaxial stress?

10. Compare and contrast features of the crystallisation in polymers with that in other types of materials, such as metals and ceramics.

Going further = ### Books * *Plastics Microstructure and Engineering Applications*, N J Mills, Edward Arnold, 1993 (2nd edition). * *Fundamentals of Polymer Science - An Introductory Text*, Paul C Painter and Michael M Coleman, Technomic, 1994.
* *Introduction to Polymers*, R J Young and P A Lovell, Chapman & Hall, 1991 (2nd edition). ### Websites * Describing itself as "a cyberwonderland of polymer fun", this multi-award-winning site contains a wealth of information about polymers.
Aims On completion of this TLP you should be able to: * Explain the basic ideas of electrochemistry, including electrochemical potentials, half-cell reactions and equilibria. * Describe the mechanisms of aqueous corrosion. * Derive the Nernst equation and show how it can be used to derive Pourbaix diagrams. * Explain the information contained in a Pourbaix diagram, and demonstrate how this can be used to predict corrosion behaviour. Before you start This TLP is largely self-explanatory, but a basic knowledge of logarithms and thermodynamics may be of some help. A general introduction to relevant thermodynamics can be found in a separate TLP. Introduction **Corrosion** is the wastage of material by the chemical action of its environment. It does not include mechanisms such as erosion or wear, which are mechanical. **Aqueous corrosion** is the oxidation of a metal via an electrochemical reaction within water and its dissolved compounds. Aqueous corrosion is dependent on the presence of water to act as an ion-conducting electrolyte. An understanding of aqueous corrosion is essential for all industries. The lifetime and safety of chemical plants, offshore platforms and ships are all dependent on controlling and predicting corrosion rates and products. This TLP introduces the concepts of electrochemical equilibrium reactions, electrode potentials, the construction of Pourbaix diagrams using the Nernst equation, and their interpretation. A Pourbaix diagram is a plot of the equilibrium potential of electrochemical reactions against pH. It shows how corrosion mechanisms can be examined as a function of factors such as pH, temperature and the concentrations of reacting species. Background Electrode potentials The electrode potential, E, of a metal refers to the potential difference measured (in volts) between a metal electrode and a reference electrode. Ee is the equilibrium potential (or reversible potential), which describes the equilibrium between two different oxidation states of the same element, at whatever concentration (or pressure) they occur. Ee varies with concentration, pressure and temperature. It describes the electrode potential when the components of the reaction are in equilibrium. This does NOT mean that they are in equilibrium with the standard hydrogen electrode. It means only that the reaction components are in equilibrium with each other. In the reaction Az+ + ze- = A, a concentration, CAz+, of Az+ is in equilibrium with solid A. The reaction moves away from equilibrium only if there is a source or sink for electrons. If this were the case, then the potential would move away from Ee. E0, the standard equilibrium potential (or standard electrode potential), is defined as the equilibrium potential of an electrode reaction when all components are in their standard states, measured against the standard hydrogen electrode (SHE). It describes the equilibrium between two different oxidation states of the same element. E0 is a constant for a given reaction, defined at 298 K. Values of E0 for various electrochemical reactions can be found in data books. At equilibrium, the chemical driving force for an electrochemical reaction, ΔG, is matched by the electrical driving force: ΔG corresponds to a charge, zF, taken through the potential, Ee. The measured potential for an electrochemical reaction is therefore directly proportional to its free energy change.
ΔG = −zF Ee where z is the number of moles of electrons exchanged in the reaction and F is Faraday's constant, 96 485 coulombs per mole of electrons. Similarly, under standard conditions, ΔG0 = −zF E0 Aqueous corrosion - ![Diagram describing aqueous corrosion](images/corrosion_picture_1A_%20andy_C.png) Oxidation of a metal in an aqueous environment is dependent on potential, E, and pH. If oxidation does occur, the metal species is oxidised and loses electrons at the anode, forming metal cations; a corresponding reduction reaction consumes electrons at the cathode. In aqueous corrosion, water is the electrolyte, an ion-conducting medium. This means that the sites of oxidation and reduction can be spatially separate. This is different from a gaseous environment, as a gas cannot conduct ions. A metal oxidising to produce metal ions may dissolve into the water, resulting in corrosion. This is different from corrosion in a gas, where the oxidised metal stays where it is produced, as an oxide film on the metal. Electrochemical half-cell reactions - A half-cell reaction is an electrochemical reaction that results in a net surplus or deficit of electrons. It is the smallest complete reaction step from one species to another. Although this reaction may proceed as a sequence of more simple reactions, these intermediate stages are not stable. A half-cell reaction can either be a **reduction**, where electrons are gained, or an **oxidation**, where electrons are lost. The following mnemonic is often helpful: **OILRIG**: **O**xidation **I**s **L**oss; **R**eduction **I**s **G**ain (of electrons). The **anode** is the site of oxidation, where electrons are lost. The **cathode** is the site of reduction, where electrons are gained. Anions, such as O2-, are negatively charged ions, attracted to the anode. Cations, such as Fe2+, are positively charged ions, attracted to the cathode. ### Reduction half-cell reactions Reduction reactions occur at the cathode and involve the consumption of electrons. In corrosion these normally correspond to the reduction of oxygen or the evolution of hydrogen, such as: O2 + 2H2O + 4e- = 4OH- O2 + 4H+ + 4e- = 2H2O 2H2O + 2e- = H2 + 2OH- 2H+ + 2e- = H2 ### Oxidation half-cell reactions Oxidation reactions occur at the anode and involve the production of electrons. For the corrosion of metals, these reactions normally correspond to the various metal dissolution or oxide formation reactions, such as: Fe = Fe2+ + 2e- Fe2+ = Fe3+ + e- Fe + 2OH- = Fe(OH)2 + 2e- 2Fe + 3H2O = Fe2O3 + 6H+ + 6e- In addition to causing corrosion, oxidation may result in the formation of a *passive* oxide. The passive oxide produced may protect the metal beneath, significantly slowing further corrosion. An example of such passivation is that of aluminium in water, where aluminium is oxidised to form a layer of Al2O3 that protects the metal beneath from further oxidation. Reference electrodes Since only differences in potential can be measured, a benchmark electrode is required, against which all other electrode potentials can be compared. The particular reference electrode used must be stated as part of the units. ### The Standard Hydrogen Electrode (SHE) The electrode reaction 2H+ + 2e- = H2 is defined as having an electrode potential, EH+/H2, of zero volts, when all reactants and products are in the standard state. The standard chemical potential of H+ at 1 molar (M) concentration is by definition equal to zero.
The standard state is defined as 298 K, a pressure of 1 bar for gases and a concentration of 1 molar (1 mol dm-3) for ions in aqueous solution. As a direct result of this, the standard hydrogen electrode (SHE) is commonly used as a reference electrode. When coupled with an electrode, the potential difference measured is the electrode potential of that electrode, as the SHE establishes by definition the zero point on the electrochemical scale. The standard hydrogen electrode consists of a platinum electrode suspended in a sulphuric acid solution with a one molar concentration of H+. Purified hydrogen is bubbled through to equilibrate the 2H+ + 2e- = H2 electrode reaction. ![Diagram of electrochemical cell](images/electrode_potential.gif) The diagram above shows how the standard potential, E0, of nickel can be determined. The nickel electrode contains Ni2+ ions in equilibrium with nickel metal. The hydrogen electrode is linked via a salt bridge to the deaerated solution in which the nickel electrode is immersed. This permits charge transfer and potential measurement but not mass transfer of the acid solution in the electrode. When Ee or E0 are measured relative to the SHE (or some other reference electrode), a voltmeter is used. The voltmeter is required to have a high impedance to prevent any current flowing between the electrode and the SHE. If a current were allowed to flow, the electrodes would become polarised and would no longer be at equilibrium. In practice, it is often difficult or impossible to determine experimentally the standard electrode potential for electrochemical systems. Many systems lie outside the water stability zone or are passive. For example, zinc will immediately begin to oxidise when immersed in water. It is, however, very simple to calculate the standard equilibrium potential accurately from the equation linking the chemical driving force with the electrical driving force, ΔG0 = -z*F* E0 Now ΔG0, the standard free energy change of the reaction, can be expressed as Δ*G*0 = μ0(products) − μ0(reactants) where μ0 is the standard chemical potential. By combining these equations, \[{E^0} = - \frac{{\Delta {G^0}}}{{zF}} = - \frac{{{\mu ^0}({\rm{products}}) - {\mu ^0}({\rm{reactants}})}}{{zF}}\] To obtain a standard equilibrium potential, E0, for an electrochemical reaction, all that is required is to look up the relevant values of standard chemical potential.
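As a check on this bookkeeping, the short Python sketch below evaluates E0 = −ΔG0/(zF) for the Zn2+/Zn couple. The standard chemical potential of Zn2+(aq) used is a typical data-book value; that of the pure metal (and, by the SHE convention, of H+ and the electron) is zero.

```python
# A minimal sketch of the calculation described above: the standard
# equilibrium potential from tabulated standard chemical potentials
# (standard Gibbs energies of formation), in J mol^-1.
F = 96485.0   # Faraday constant, C per mole of electrons

def standard_potential(mu0_products, mu0_reactants, z):
    """E0 = -dG0 / (zF) for a reduction reaction as written."""
    dG0 = mu0_products - mu0_reactants
    return -dG0 / (z * F)

# Zn2+ + 2e- = Zn : products = Zn(s), reactants = Zn2+(aq)
mu0_Zn = 0.0            # pure element in its standard state
mu0_Zn2plus = -147.1e3  # J mol^-1, typical data-book value
print(f"E0(Zn2+/Zn) = {standard_potential(mu0_Zn, mu0_Zn2plus, z=2):+.3f} V (SHE)")
# -> about -0.76 V, in agreement with tabulated values
```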
How corrosion of metal occurs - If a metal surface is immersed in an electrolyte such as water, metal ions tend to be lost from the metal into the electrolyte, leaving electrons behind on the metal. This will continue to occur until the metal reaches its equilibrium potential and the system comes to equilibrium, with a certain concentration of dissolved ions. The metal is then at its equilibrium potential, Ee. If the electrolyte were to be continuously replaced (by water flowing through a pipe, for example), more and more metal ions would be lost, resulting in continuous corrosion of the metal. A cathodic reaction may occur that uses up the electrons lost by the metal species. In the reaction 2H+ + 2e- = H2, if the hydrogen gas evolved is lost from the system, the reaction is prevented from reaching equilibrium. The cathodic reaction acts as a sink for electrons liberated in the oxidation reaction of the metal. As a result of this, the metal will not reach its equilibrium potential. The metal oxidation reaction is therefore not in equilibrium and proceeds at a net rate. The difference between the potential, E, of the metal and its equilibrium potential, Ee, is called the overpotential, and is given the symbol η. η = E − Ee As corrosion occurs, the mass of metal is reduced due to the conversion of atoms to ions, which are subsequently lost. The sites of oxidation (the anode) and reduction (the cathode) can both be situated on the same piece of metal: there is no need for an external electrode to be present for the process to occur. Rules for balancing electrochemical equations - The aim of this procedure is to balance electrochemical equations in terms of electronic charge and moles of components, given the main reaction product and reactant. By convention, electrochemical reactions are written as the REDUCTION of the species concerned, proceeding to the right. The species with the lower oxidation state is written on the right hand side. The rules are as follows: 1. Write down the main reaction components, with the reduced form (the form with the lowest valency) on the right. 2. Add stoichiometric numbers to balance the number of metal atoms. (Don't worry about charge or oxygen being balanced at this point.) 3. Balance the number of oxygen atoms by adding H2O to the appropriate side. 4. Balance the number of hydrogen atoms by adding hydrogen ions (H+) to the appropriate side. 5. Balance the residual charge by adding electrons (e-) to the appropriate side. Now each side of the equation has the same number of atoms of each element and the same overall charge. ### Example of balancing an electrochemical reaction Find the electrochemical reaction for an equilibrium between Cr2O3 and CrO42- 1. Write reduced species on right: CrO42-  →  Cr2O3 2. Balance Cr metal atoms: 2 CrO42-  →  Cr2O3 3. Balance oxygen atoms with water: 2 CrO42-  →  Cr2O3 + 5 H2O 4. Balance hydrogen atoms with hydrogen ions: 2 CrO42- + 10 H+  →  Cr2O3 + 5 H2O 5. Balance charge with electrons: 2 CrO42- + 10 H+ + 6 e-  →  Cr2O3 + 5 H2O Check: each side of the equation has two Cr, eight O, ten H and zero residual charge - so it is balanced.
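The five rules lend themselves to a mechanical check. The Python sketch below verifies the chromate example by counting atoms and residual charge on each side; the species compositions are written out by hand rather than parsed from the formulae.

```python
# A small sketch that checks the worked example above: after applying
# the five rules, both sides should have the same atom counts and the
# same residual charge.
from collections import Counter

def totals(side):
    """Sum atom counts and charge over (coefficient, atoms, charge) terms."""
    atoms, charge = Counter(), 0
    for coeff, composition, q in side:
        for element, n in composition.items():
            atoms[element] += coeff * n
        charge += coeff * q
    return atoms, charge

# 2 CrO4^2- + 10 H+ + 6 e-  ->  Cr2O3 + 5 H2O
lhs = [(2, {"Cr": 1, "O": 4}, -2),   # chromate ions
       (10, {"H": 1}, +1),           # hydrogen ions
       (6, {}, -1)]                  # electrons
rhs = [(1, {"Cr": 2, "O": 3}, 0),    # chromium(III) oxide
       (5, {"H": 2, "O": 1}, 0)]     # water

assert totals(lhs) == totals(rhs), "equation is not balanced"
print("balanced:", totals(lhs))  # two Cr, eight O, ten H, zero charge
```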
The Nernst equation = The Nernst equation links the equilibrium potential of an electrode, Ee, to its standard potential, E0, and the concentrations or pressures of the reacting components at a given temperature. It describes the value of Ee for a given reaction as a function of the concentrations (or pressures) of all participating chemical species. In its most fundamental forms, the **Nernst equation for an electrode** is written as: \[{E\_e} = {E^0} - \frac{{2.303RT}}{{zF}}\log \frac{{[{\rm{reduced}}]}}{{[{\rm{oxidised}}]}}\] or \[{E\_e} = {E^0} - \frac{{RT}}{{zF}}\ln \frac{{[{\rm{reduced}}]}}{{[{\rm{oxidised}}]}}\] R is the universal gas constant (8.3145 J K-1 mol-1) T is the absolute temperature z is the number of moles of electrons involved in the reaction as written F is the Faraday constant (96 485 C per mole of electrons) The notation [*reduced*] represents the product of the concentrations (or pressures where gases are involved) of all the species that appear on the reduced side of the electrode reaction, raised to the power of their stoichiometric coefficients. The notation [*oxidised*] represents the same for the oxidised side of the electrode reaction. Example: the oxygen reduction reaction - In the reaction O2 + 4H+ + 4e- = 2H2O, water is the reduced species and the oxygen gas is the oxidised species. By convention, electrochemical half-equations are written as Oxidised State  +  *ne*-  ⇌  Reduced State Taking into account the stoichiometric coefficients of the species, the log term of the Nernst equation for this reaction appears as \[\log \frac{{{{[{H\_2}O]}^2}}}{{{p\_{{O\_2}}}{{[{H^ + }]}^4}}}\] Some of the species that take part in electrode reactions are pure solid compounds. The standard state for these compounds is unit mole fraction; as they are pure, they are in their standard states, so their terms in the log expression are equal to unity. In dilute aqueous solutions, water has an overwhelming concentration, so it may be considered pure. The standard state for a gas is taken as 1 *atm* (or 1 bar) and the standard state for solutes (such as ions) is taken as 1 *mol dm-3*. The log term of the Nernst equation can now be reduced to \[\log \frac{1}{{{p\_{{O\_2}}}{{[{H^ + }]}^4}}}\] The Nernst equation at standard temperature: At 298.15 K (25 °C), the numeric values of the constants can be combined to give a simpler form of the **Nernst equation for an electrode**: \[{E\_e} = {E^0} - \frac{{0.0591}}{z}\log \frac{{[{\rm{reduced}}]}}{{{\rm{[oxidised]}}}}\] This equation can be applied both to the potentials of individual electrodes and the potential differences across a pair of half-cells. However, it is generally more convenient to apply the Nernst equation to one electrode at a time. General expression of the Nernst Equation - Taking the general equation for a half-cell reaction as aA + mH+ + ze− = bB + H2O, the Nernst equation becomes \[{E\_e} = {E^0} + \frac{{0.0591}}{z}\log \frac{{{{[A]}^a}}}{{{{[B]}^b}}} - \frac{m}{z}0.0591\,{\rm{pH}}\] Construction of a Pourbaix Diagram A Pourbaix diagram plots the equilibrium potential (Ee) between a metal and its various oxidised species as a function of pH. The extent of the half-cell reactions that describe the dissolution of a metal, M = Mz+ + ze-, depends on various factors, including the potential, E, the pH and the concentration of the oxidised species, Mz+. The Pourbaix diagram can be thought of as analogous to a phase diagram of an alloy, which plots the lines of equilibrium between different phases as temperature and composition are varied. To plot a Pourbaix diagram the relevant Nernst equations are used. As the Nernst equation is derived entirely from thermodynamics, the Pourbaix diagram can be used to determine which species is *thermodynamically* stable at a given E and pH. It gives no information about the *kinetics* of the corrosion process. Constructing a Pourbaix Diagram - The following animation illustrates how a Pourbaix diagram is constructed from first principles, using the example of zinc. Anatomy of a Pourbaix Diagram = The Pourbaix diagram provides much information on the behaviour of a system as the pH and potential vary. The following animation explains how a Pourbaix diagram is built up from fundamentals. Examples of a Pourbaix Diagram ![Pourbaix diagrams for gold, zinc and aluminium](images/PDcomparison.png) Gold's Pourbaix diagram explains why it is the most immune metal known. It is immune in all regions in which cathodic reactions can take place, so gold *never\** corrodes in an aqueous environment. Immunity of aluminium occurs only at very low potentials. Therefore, unless under conditions that cause it to passivate, it is much more susceptible to corrosion than gold or zinc. \* provided that the water is pure, and that no complexing ions are present to provide a cathodic half-cell reaction that occurs at a potential higher than +1.5 V (SHE).
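Each line on such a diagram is just the Nernst equation for one reaction. A minimal Python sketch follows, assuming a dissolved-ion activity of 10−6 (a common convention for corrosion diagrams, not a universal requirement) and standard E0 values from data books:

```python
# A minimal sketch of how individual Pourbaix-diagram lines follow from
# the Nernst equation at 25 C, using the general expression above.
import numpy as np

def nernst_line(E0, z, m, pH, log_ratio=0.0):
    """Ee for aA + mH+ + ze- = bB + H2O, with log_ratio = log([A]^a/[B]^b)."""
    return E0 + (0.0591 / z) * log_ratio - (m / z) * 0.0591 * pH

pH = np.linspace(0, 14, 15)

# Zn2+ + 2e- = Zn: no H+ involved (m = 0), so the line is horizontal.
E_zn = nernst_line(E0=-0.76, z=2, m=0, pH=pH, log_ratio=np.log10(1e-6))

# Water stability lines: 2H+ + 2e- = H2 and O2 + 4H+ + 4e- = 2H2O.
E_h2 = nernst_line(E0=0.0, z=2, m=2, pH=pH)     # slope -0.0591 V/pH
E_o2 = nernst_line(E0=1.229, z=4, m=4, pH=pH)   # slope -0.0591 V/pH

print(f"Zn2+/Zn line sits at {E_zn[0]:.2f} V (SHE) at all pH")
print(f"hydrogen line: {E_h2[0]:.2f} V at pH 0, {E_h2[-1]:.2f} V at pH 14")
```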
Constructing a 3D Pourbaix Diagram A Pourbaix diagram does not have to be limited to two dimensions. Three- (or higher-) dimensional diagrams can be constructed by varying other parameters, such as concentration or temperature. Constructing a 3D Pourbaix Diagram Summary = In this package: * the concepts of equilibrium potential and its measurement are introduced. * electrochemical half-cells are defined and the treatment of electrochemical equations is demonstrated. * the physical and chemical processes which lead to aqueous corrosion are examined. * the Nernst equation is derived and the way in which it links measured potentials at various conditions with standard equilibrium potentials is discussed. * the way in which some electrochemical reactions have equilibrium potentials that vary as a function of pH is considered, and the concept and derivation of a Pourbaix diagram are introduced. * through the use of specific examples, the characteristics of Pourbaix diagrams and their uses are examined. The stability of water is demonstrated through the use of the Pourbaix diagram. * cathodic and anodic reaction lines on Pourbaix diagrams are discussed, and the way in which a point on the diagram corresponds to physical corrosion is examined. Questions = **Quick questions** You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again! Going further = ### Books * J.M. West, *Basic Oxidation and Corrosion*, Ellis Horwood. * L.L. Shreir, R.A. Jarman and G.T. Burstein, *Corrosion*, third edition, Butterworth-Heinemann. ### Websites * A companion DoITPoMS TLP introduces the mechanism of aqueous corrosion and the associated kinetics.
Aims On completion of this TLP you should: * Understand the factors that control the dynamics of solid particles in fluids * Be familiar with the main method for measurement of particle size distribution * Be able to make simple calculations relating to the flow of fluids through permeable media, such as filters * Understand how powders are commonly handled and processed to make solid artefacts or surface coatings Before you start There are no special prerequisites for this TLP. Introduction Handling and processing of powders is central to many areas of science and technology.  Most ceramic materials can only be formed via powder processing, but powder metallurgy is also an important branch of materials science, and polymers are frequently handled as (coarse) particles.  Powder processing is also pivotal in many other industrial sectors, such as food, pharmaceuticals, agriculture and mining.  Moreover, an understanding of the behaviour of assemblies of solid particles in fluids is essential in areas as diverse as filtration of Diesel exhaust and avoiding explosions in flour mills.  Particle sizes of interest vary from a few nm to several mm (although most technological activity is focussed on the range from 100 nm to 100 microns). Important background includes the factors affecting the dynamics of solid particles suspended in fluids and the flow of fluids through porous media (including filters).  This TLP provides an introduction to the equations and dimensionless numbers that control these effects, with simulations that allow the behaviour to be explored in particular cases.  There are also sections dealing with measurement of particle size and the procedures involved in consolidation of powder particles into solid articles, via various types of sintering process, and the creation of surface coatings via thermal spraying of particles. Powder particles in a fluid stream Particles in a fluid can experience several different types of force. The main ones are:  (a) *Gravitational / buoyancy* (usually due to the particle being more dense than the fluid, creating a force in the direction of a gravitational, or centrifugal, acceleration), (b) *Drag (viscous)* (a velocity difference between particle and fluid leading to a drag force acting to reduce this difference) and (c) *Electrostatic* (due to particles becoming electrically charged and experiencing forces if electric or magnetic fields are present). A key characteristic of fluid flows is the ***Reynolds number***, *Re*, representing the ***ratio of inertial to viscous forces***; it can be written Re = ρuL/η, where ρ and η are the density and viscosity of the fluid, u is the flow speed and L is a characteristic length scale of the flow.  Low values (*Re* < ~10 - 500, depending on geometry) imply that viscous forces predominate, so that flow tends to be ***laminar and smooth***, whereas large values lead to ***turbulence***, with ***chaotic eddies*** and ***instabilities***. For ***laminar flow***, the basic equation of fluid motion (the ***Navier-Stokes equation***) can be solved to give ***Stokes' law***, relating the ***frictional (drag) force*** on a spherical body to its size and relative velocity in a fluid: F = 3πηdu. By setting this force equal to the net gravitational force, the ***steady state (terminal) velocity*** of a (spherical) body falling under gravity in a static fluid can be obtained as: \[{u\_{{\rm{ter}}}} = \frac{{{d^2}\Delta \rho g}}{{18\eta }}\] where Δρ is the difference in density, d is the particle size, g is the acceleration due to gravity and η is the viscosity of the fluid.
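A minimal Python sketch of this estimate, using the property values quoted in the text (the air density in the Reynolds-number check is an assumed room-temperature value):

```python
# Terminal velocity from balancing Stokes' drag (3*pi*eta*d*u) against
# the net gravitational force on a sphere.

def terminal_velocity(d, delta_rho, eta, g=9.81):
    """Stokes terminal velocity (m/s) for a sphere of diameter d (m)."""
    return d**2 * delta_rho * g / (18.0 * eta)

eta_air = 1.8e-5      # Pa s, air at room temperature
delta_rho = 4000.0    # kg m^-3, typical powder particle in air

for d in (100e-6, 1e-6):
    u = terminal_velocity(d, delta_rho, eta_air)
    # A quick self-consistency check on the laminar-flow assumption:
    Re = 1.2 * u * d / eta_air   # air density ~1.2 kg m^-3 (assumed)
    print(f"d = {d*1e6:5.1f} um : u_ter ~ {u:.2e} m/s (Re ~ {Re:.2g})")
```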
For a typical (ρ ~ 4000 kg m-3) powder particle, of diameter 100 μm, falling freely in air, this velocity is about 1 m s-1.  Of course, for smaller particles it is much lower (uter ~ 0.1 mm s-1 for 1 μm diameter), so that such ***fine particulate*** has a very ***slow sedimentation rate*** and readily tends to become, and remain, ***airborne***.  The simulation below allows exploration of terminal velocity values for various cases, and also shows how this velocity is approached by a particle that is initially at rest in the fluid. Once 'Start' has been clicked, the particle size and density, and the type of fluid, can be changed while the simulation is running, which may be helpful in exploring the sensitivity of the behaviour to these variables. Provided below are four short videos, showing the outcome of inverting small transparent containers in which there are particles suspended in a liquid. For two of them, the liquid is water, while for the other two it is (rapeseed) oil. In each case, there are either small (average diameter ~ 30 μm) or large (average diameter ~ 200 μm) particles present in the liquid. (The size distribution for these two sets of particles is shown at the end of this page: the particles are volcanic ash, with a density of about 2.5 Mg m-3, so that Δρ is in all cases ~ 1.5 Mg m-3.) For coarse particles in water, the terminal velocity is ~ tens of mm/s, so particles pass through the field of view in about a second or so, and it takes less than a minute for essentially all of the particles to have fallen to the base of the container. Coarse particles in Water (Viscosity ~ 10-3 Pa s) - uter ~ 30 mm s-1 With finer particles, the terminal velocity is ~ 1 mm/s. Particles now take something like 20 s to pass through the field of view and they are more susceptible in the initial period to being carried upwards by convection currents. In this case, it takes several minutes for the liquid to become clear. Fine particles in Water (Viscosity ~ 10-3 Pa s) - uter ~ 1 mm s-1 The rapeseed oil has a much higher viscosity than water, so even the coarse particles sediment slowly, taking several tens of seconds to pass through the field of view, and it takes about 20 minutes for the liquid to become clear again. Coarse particles in Oil (Viscosity ~ 5 × 10-2 Pa s) - uter ~ 0.5 mm s-1 Fine particles in the oil have a very low terminal velocity (of a few tens of microns per second). They can take periods of a minute or more to pass through the field of view, and in fact it can be seen that particles can readily be carried upwards by convection currents. However, another effect is also apparent here - there is a tendency for these fine particles to clump together in this viscous liquid, and relatively dense agglomerates of this type behave differently, tending to fall with a higher velocity, in a similar way to that expected with much larger particles. Fine particles in Oil (Viscosity ~ 5 × 10-2 Pa s) - uter ~ 0.02 mm s-1 The particle size distributions for the two powder samples in the above videos are shown below. (These were obtained by laser scattering - see the next page.)
![PSD for video samples](images/1-PSDs fine&coarse VA powders.jpg) Particle size measurement by light scattering = There are several techniques for measuring the Particle Size Distribution (PSD) of a powder, but the most popular is based on the way that particles scatter light, which depends on their size. Such scattering is very commonplace. A light beam passing through a column of smoke is reduced in intensity, but little of this reduction is due to actual absorption of light by the particles and, if the column were to be viewed from the side in a darkened room, it would be clear that most of the "lost" light is being scattered sideways. This is mainly Fraunhofer scattering, similar to that emerging from a narrow slit; the corresponding scattering angle is inversely proportional to the particle size. Measurement of the intensity of light scattered from a dispersion of particles, as a function of scattering angle, thus allows the particle size distribution (PSD) to be deduced. Particles are normally dispersed in a liquid, which may need to be continuously stirred, and laser light is used, giving high incident intensities at a fixed wavelength. Software is supplied that allows the PSD to be automatically produced after a short time (~1 minute) of measurement. The range of particle size for which this technique is reliable is from around 1 μm up to ~100 μm, which is satisfactory for many powders. The presence of smaller particles can be analysed via the effect that Brownian motion has on them, creating small fluctuations in the intensity of light scattered through large angles. These fluctuations can be detected, allowing measurement of particles down to about 0.1 μm, although specialised experimental set-ups are required for this. The image below shows a set-up for measurement of PSD via scattering of laser light (http://www.lsinstruments.ch/technology/small\_angle\_light\_scattering/) ![Set-up of PSD via scattering of laser light](images/ferrilens3.gif) The two videos below were obtained using a set-up similar to that shown above, with the two powders used in the previous page suspended in rapeseed oil. (As indicated there, these particles remain in suspension for extended periods.) The first shows the outcome with the coarser particles. It can be seen that most of the scattered light is being diffracted through an angle of about 0.2-1˚, which is broadly consistent with the Fraunhofer equation (see above). The stochastic nature of the scattering events is apparent in this video. Laser scattering with 200 μm particles in suspension The corresponding video for the 30 μm powder particles can be seen below. It can be seen that the average scattering angle is now somewhat larger - up to about 2˚, again in approximate agreement with the Fraunhofer equation. Also, while there are again random fluctuations as the individual particles tumble through the laser beam, the fact that they are finer, and there are more of them, leads to a slightly more uniform and constant image than with the coarser particles. Laser scattering with 30 μm particles in suspension
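The angles quoted for the two videos can be checked against a simple Fraunhofer estimate. The Python sketch below uses the first-minimum condition for a circular aperture, sin θ ≈ 1.22 λ/d; the laser wavelength is an assumed typical value, since the instrument's wavelength is not specified here.

```python
# A rough sketch of the Fraunhofer estimate: the first diffraction
# minimum for a particle of size d sits at sin(theta) ~ 1.22*lambda/d.
import math

WAVELENGTH = 650e-9   # m, assumed red laser

def scattering_angle_deg(d):
    """Approximate first-minimum scattering angle for particle size d (m)."""
    return math.degrees(math.asin(1.22 * WAVELENGTH / d))

for d in (200e-6, 30e-6):
    print(f"d = {d*1e6:5.0f} um -> theta ~ {scattering_angle_deg(d):.2f} deg")
# -> ~0.2 deg for the 200 um powder and ~1.5 deg for the 30 um powder,
#    broadly matching the angles seen in the two videos
```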
Particle impact on substrates = The likelihood of a particle in a fluid stream striking a solid obstacle depends on the **Stokes number, Stk**, which is the ratio of the characteristic time needed for the particle to change its velocity to that needed for it to pass the obstacle. If Stk >> 1, then particles will strike the obstacles, while Stk << 1 means that most particles are expected to pass around them within the fluid stream. The Stokes number can be written as Stk = Δρd2u / (18ηD), where D is the size of the obstacle and u the relative velocity of the fluid. It can be seen from this expression that particle size is important, with finer particles more likely to be carried around an obstacle without striking it, while coarser (and more dense) particles cannot change their velocity so quickly and are likely to carry on in a straight line, so that impact occurs. Fluid velocity and viscosity are also relevant, with high velocity and low viscosity also favouring impact. These effects can be quantitatively explored in the simulation shown below - click on "start" to inject the particles, after setting the parameter values. (Note that the substrate (obstacle) size in this simulation is fixed at the rather low value of 0.3 mm, and the velocity range is only up to 1 m/s: in the next page, this formulation is used in a simulation of thermal spraying, in which a high velocity air stream carries particles towards a substrate.) Case study - thermal spraying = Thermal spraying is widely used to produce high quality coatings, mostly ceramic or metallic, and, to a lesser extent, for purposes of repair or to build up a shaped component. It is most commonly carried out by feeding powder into a high temperature, high velocity gas stream, where the particles gain heat and momentum (and may or may not closely approach the temperature and/or velocity of the surrounding gas). Particles are carried by the gas stream towards a substrate, on which they are expected to impinge, deform and adhere. The issue of whether a particle in a fluid stream is likely to strike an obstacle is covered in the previous page: it's controlled by the Stokes number. As explained there, very fine particles are unlikely to strike surfaces. During thermal spraying, velocities are quite high (u ~ 100 m s-1), favouring impact, but obstacles (substrates) are relatively large (D > ~10 mm), with the opposite effect (giving the particle more time to change direction). An estimate of the minimum particle size for impact (by setting Stk ~ 1), for a particle with a typical density (~3 × 103 kg m-3) and a gas viscosity of ~3 × 10-5 Pa s, is given by \[{d\_{\min }} \approx \sqrt {\frac{{18\eta D}}{{\Delta \rho u}}} \approx \sqrt {\frac{{18(3 \times {{10}^{ - 5}})(0.01)}}{{(3 \times {{10}^3})(100)}}} \approx 5\,\mu {\rm{m}}\] In general, therefore, thermal spraying is not carried out with fine particles and a typical size range would be ~20-100 μm. The treatment on the previous page can also be used to explore the times and distances over which particles will be accelerated to velocities close to that of the gas (which may itself be dropping off with distance ahead of the torch). The expression for the characteristic time needed for a particle to change its velocity (so that it is close to that of the gas) leads to times of ~10 ms and ~40 ms for particle diameters of 50 μm and 100 μm. Gas velocities in thermal spraying range from several tens of m s-1 to (supersonic) speeds of several hundred m s-1. Taking 100 m s-1 as representative, the distances needed for particles with these two sizes to approach that speed (assuming linear acceleration over the above periods) are about 0.5 m and 2 m. This is, of course, a crude estimate, and there is no need for the particle velocity to closely approach that of the gas, but it does show that a relatively large "stand-off" distance may be needed for particles to reach peak velocity.
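A short Python sketch of these order-of-magnitude estimates (the outputs land near, though not exactly on, the rounded figures quoted above):

```python
# Minimum particle size for impact (Stk ~ 1) and the characteristic
# velocity-relaxation time tau = rho_p * d^2 / (18 * eta).
import math

ETA_GAS = 3e-5     # Pa s, hot gas viscosity
RHO_P = 3e3        # kg m^-3, typical particle density
U = 100.0          # m/s, gas velocity
D_OBSTACLE = 0.01  # m, substrate size

# Setting Stk = rho_p * d^2 * u / (18 * eta * D) equal to 1:
d_min = math.sqrt(18 * ETA_GAS * D_OBSTACLE / (RHO_P * U))
print(f"d_min for impact ~ {d_min*1e6:.1f} um")   # ~4-5 um

for d in (50e-6, 100e-6):
    tau = RHO_P * d**2 / (18 * ETA_GAS)
    # crude stand-off estimate, assuming linear acceleration up to U:
    print(f"d = {d*1e6:3.0f} um : tau ~ {tau*1e3:.0f} ms, "
          f"distance ~ {0.5 * U * tau:.1f} m")
```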
The other main issue is heat transfer from gas to particle. This is complex, but can be simplified by use of a heat transfer coefficient (interfacial thermal conductance), hi, which relates the heat flux into a particle to the temperature difference between the particle surface and the nearby gas. The value of hi can be expressed as Kg / δ, where Kg is the gas thermal conductivity and δ the thickness of a thermal boundary layer in the gas around the particle. The value of δ depends on relative velocity, gas viscosity, particle size and shape etc, so generalisations are difficult. Typically, however, hi ranges from ~100 kW m-2 K-1 (fine particle, high relative velocity) to ~1 kW m-2 K-1 (coarse particle, low relative velocity). Since Kg for most gases is ~0.02 W m-1 K-1, the corresponding boundary layer thicknesses are ~0.2 μm and ~20 μm. A value of hi allows estimation of heating rates, and also of whether particles remain isothermal or develop large internal thermal gradients. The latter can be decided from the value of the Biot number (ratio of the conductance of the interface to that of the particle): \[{\rm{Bi }} \approx \frac{{{h\_i}}}{{K/L}} \approx \frac{{{h\_i}(d/2)}}{{{K\_p}}}\] Using the figures above, and taking Kp to be ~5 W m-1 K-1 (ceramic), Bi for a fine particle (d ~ 5 μm) is ~0.05, while for a large particle (d ~ 100 μm) it is ~0.01. These are both <<1, implying that all (thermally-sprayed) particles remain isothermal during the process (since the thermal resistance of the particle interior is small relative to that of the interface). This will be even more strongly the case for metallic particles (higher Kp). A heat balance on such an isothermal spherical particle, equating the heat flux through its surface to its rate of heat gain, gives \[{h\_i}\Delta T(4\pi {(d/2)^2}) = c\left( {\frac{{{\rm{d}}T}}{{{\rm{d}}t}}} \right)\left( {\frac{4}{3}\pi {{(d/2)}^3}} \right)\] \[ ∴ \frac{{{\rm{d}}T}}{{{\rm{d}}t}} = \frac{{6{h\_i}\Delta T}}{{cd}}\] If ΔT (initial value) is ~1,000 K, and the specific heat (per unit volume), c, is ~3 × 106 J m-3 K-1, then the heating rate is ~4 × 107 K s-1 for a fine particle and ~2 × 104 K s-1 for a coarse one. These are both high rates, but particle size clearly has a strong effect. With typical flight times of the order of a few tens of ms, even 100 μm particles will become heated, but perhaps only by a few hundred K. For this reason, large particles (>100 μm) are rarely used in thermal spraying and, particularly for materials with high melting points (eg ceramics), it's often necessary for particles to be relatively small (<~50 μm), as well as making the gas temperature as high as possible.
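The Biot-number check and the heating-rate estimate are easy to reproduce; a minimal Python sketch using the property values quoted above:

```python
# Biot number (does the particle stay isothermal?) and initial heating
# rate dT/dt = 6*hi*deltaT/(c*d), for a ceramic particle.
K_P = 5.0        # W m^-1 K^-1, particle conductivity
C_VOL = 3e6      # J m^-3 K^-1, volumetric specific heat
DELTA_T = 1000.0 # K, initial gas-particle temperature difference

cases = {"fine (5 um)": (5e-6, 1e5),      # (diameter, h_i in W m^-2 K^-1)
         "coarse (100 um)": (100e-6, 1e3)}

for label, (d, h_i) in cases.items():
    Bi = h_i * (d / 2) / K_P            # << 1 -> particle ~isothermal
    rate = 6 * h_i * DELTA_T / (C_VOL * d)
    print(f"{label}: Bi ~ {Bi:.2g}, initial heating rate ~ {rate:.1e} K/s")
```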
The simulation below, in which these relationships are used to estimate both the particle temperature and its Stokes number on reaching the substrate, allows exploration of the conditions needed to ensure that a deposit is formed (ie that the particles will strike the substrate while molten), for some selected materials. Passage of particles through a filter = Mechanical filtering (trapping particles in some sort of mesh or porous medium) is an obvious method of removing harmful particulate from a fluid (notably both air and water), although it is not really a suitable approach to obtaining or classifying powder fractions. (While it is sometimes possible to clean, or "regenerate", filters, it's not normally practicable to extract powder from them.) Key filtration issues relate to the twin (conflicting) requirements of trapping (fine) particulate, while avoiding substantial inhibition of the fluid flow. The latter may concern clogging, and the possibility of periodic removal of trapped material, but a fine filter, even when clean, may require a relatively high pressure drop across it to create the necessary flow rate. There is interest in filtration of many species from fluids, ranging from coarse inorganic suspensions to small dissolved ions. This is illustrated in the figure below, which shows some of the terms commonly used to denote different types of filtration and the corresponding length scales. Small molecules and ions cannot be removed via mechanical entrapment and require precipitation or osmotic separation. However, provided a suitably fine permeable medium is available, mechanical filtration can be effective for very small species (down to ~1 nm), although this may be at the cost of very low fluid flow rates - see below. ![Terms used for filtration](images/filter_terms.jpg) The pressure drop (ΔP) across a filter of thickness Δx, needed to generate a fluid flux through it of Q (m3 m-2 s-1), is dictated by ***Darcy's law*** \[Q = \frac{\kappa }{\eta }\frac{{\Delta P}}{{\Delta x}}\]  where η (Pa s) is the viscosity of the fluid and κ (m2) is the ***specific permeability*** of the medium (filter).  Finer filters do, of course, tend to have lower permeabilities, leading to larger pressure drops and/or lower flow rates.  This equation can be used to explore specific filtration requirements.  There are expressions available for prediction of the permeability of a porous medium, such as the Carman-Kozeny equation. In the simulation below, a flow situation can be set up in terms of applied pressure gradient (pressure drop and filter thickness), filter type (fibre or particle size and porosity level) and fluid type, both chosen from a small number of options.  (The DPF option for filter type is a Diesel Particulate Filter  -  see the following page.)  For each of these, the Carman-Kozeny equation is used to give a permeability, which is then employed to predict the fluid flux (represented as the velocity of markers  -  there is no depiction of particles becoming trapped in this simulation).
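As an illustration of how these two equations combine, the Python sketch below estimates the pressure drop for a DPF-like case. The Carman-Kozeny prefactor used (180, for a packed bed of spheres) is one common choice and is not necessarily the form used in the simulation or the parallel-cylinder model mentioned on the next page; the input values are illustrative.

```python
# Darcy's law combined with a common Carman-Kozeny form for packed
# spheres: kappa = e^3 * d^2 / (180 * (1 - e)^2).
def carman_kozeny_permeability(d, porosity):
    """Specific permeability (m^2) for a bed of spheres of diameter d."""
    return porosity**3 * d**2 / (180.0 * (1.0 - porosity)**2)

def darcy_flux(kappa, eta, dP, dx):
    """Fluid flux Q (m^3 m^-2 s^-1) through a filter of thickness dx."""
    return (kappa / eta) * dP / dx

# Illustrative DPF-like numbers: 20 um particles, 50% porosity, 1 mm
# wall, exhaust-gas viscosity ~3e-5 Pa s, target flux ~0.1 m/s.
kappa = carman_kozeny_permeability(d=20e-6, porosity=0.5)
dP = 0.1 * 3e-5 * 1e-3 / kappa   # Darcy's law rearranged for delta-P
print(f"kappa ~ {kappa:.2e} m^2, pressure drop ~ {dP/100:.0f} mbar")
# -> a few tens of mbar, comfortably below the ~100 mbar limit quoted
#    on the next page
```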
Diesel particulate filters An important and demanding filtration requirement is that for automotive ***Diesel*** engine ***exhaust***, from which very fine (10-100 nm) ***carbon particles*** must be removed.  Such particulate is present in many combustion exhausts  -  ***candle flames*** are particularly rich in it  -  and the level in Diesel exhaust is relatively high.  The ***gas flow rate*** for a Diesel car is ~250 kg h-1, the exhaust pipe diameter is ~150 mm and a back-pressure above ~100 mbar across the filter would impair operation of the engine.  The filter surface area is raised via a ***honeycomb geometry*** of the type illustrated below (doi:10.1595/147106709x390977), so that the high flow rate creates only a fairly low flux through the filter wall, Q, of ~0.1 m3 m-2 s-1 (ie a gas velocity of ~0.1 m s-1), and the wall thickness, Δx, is kept low (~1 mm).  The Carman-Kozeny equation was used to create the figure below giving the ***pressure drop*** as a function of ***pore size***, based on the filter material having the structure of a set of parallel cylinders, with a porosity level of 50%.  The plot suggests that pore dimensions finer than a few microns will lead to an unacceptably high pressure drop, and in fact DPFs are normally made by ***lightly sintering*** together relatively ***coarse particulate***, creating pores with dimensions of tens of microns. ![Fig_3-4a DPF schem](images/diesel_graph.jpg)  ![Fig_4-4b_DP(D)](images/diesel_grsph_info.jpg) This appears problematic for filtration of ***substantially sub-micron particles***.  Fortunately, the carbon particles in Diesel exhaust adhere well to each other, so that a ***network*** of them builds up fairly quickly in the pores.  When "clean", however, the filtration efficiency is relatively low, and filters are repeatedly returned to this state, since they need to be "***regenerated***" every few hundred km (by ***injecting fuel into the exhaust***, so that the accumulated ***carbon particles are burnt off***).  This is illustrated in the figure below, which shows experimental data for the pressure drop across a DPF, as a function of mass gain during operation, and the initial filtration efficiency after regeneration. ![DPS soot mass](images/6a_deltaP(soot_mass)s.jpg) ![Soot](images/6b_filter_eff(soot_load)s.jpg) In practice, there are several factors that influence the choice of material and production route for a DPF, including a need to be resistant to high temperatures and thermal shock. Powder consolidation by cold pressing and sintering = Powder processing is often an attractive alternative to casting or deformation processing in order to produce shaped components. There are many situations in which it is the only viable option - for example, if the material is difficult to melt, contain or deform. The powder is mixed with some sort of fugitive (usually polymeric) binder and moulded into the required shape, often by simply applying pressure at ambient temperature to a flexible (rubber) mould containing the mixture. The resultant "green compact" is then fired at high temperature (usually well below the melting temperature of the material), during which the binder is driven off and the compact becomes consolidated as a result of sintering - see below. The powder may be just a single species, although sometimes "sintering aids" are added. Sintering often requires extensive diffusion, so the temperature usually needs to be relatively high (>~0.6Tm). The diffusion is driven by the resultant reduction in surface area. On a local scale, diffusion tends to reduce the local curvature of the free surface. This is illustrated in the figure below, where it can be seen that the diffusion can either occur within the surface or can be "internal" - ie through the lattice or via preferred routes such as grain boundaries or dislocations. This figure highlights an important point, which is that, while both types of diffusion cause growth of necks (via transport of material to regions of high surface curvature), raising the strength of the compact, only internal diffusion leads to densification (ie movement of the centres of particles towards each other). It's sometimes desirable to create strong compacts with relatively high porosity levels (eg for lubricated bearings). In such cases, conditions are sought that will favour surface diffusion over internal diffusion. ![Internal diffusion leading to neck growth](images/internal_diffusion_neck.jpg) The animation below shows schematic depictions of the processes of mould filling, cold pressing and the diffusion that causes neck growth (with or without densification). Also shown are representations of the process variants of liquid phase sintering, reactive sintering and hot isostatic pressing (HIP). Summary = You should now understand at least some of the basic principles involved in the processing of powders.
This TLP does not cover all aspects of powder technology  -  for example, there is nothing on production of powders, on their classification into different size ranges or on their safe handling.  Nevertheless, much of the underlying science, particularly that governing the dynamics of particles in fluid streams, has been covered here, and you should be able to carry out simple calculations concerning sedimentation of powder particles in fluids and on whether they are likely to strike obstacles in a fluid stream  -  this is relevant, for example, to their removal by filtration and to the process of thermal spraying.  The Reynolds number, the Stokes number, the Biot number, Darcy's law and the permeability of a porous medium are all described in the context of powder processing. Moreover, the procedures that are commonly used to produce artefacts from metal and ceramic powders are described, although there is no quantification of the sintering phenomena that take place during them. Questions = ### Quick questions *You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. The Reynolds number is a measure of:

| | |
| - | - |
| a | The ratio of inertial forces to viscous forces, and hence of the tendency for fluids to flow faster. |
| b | The ratio of inertial forces to viscous forces, and hence of the tendency for fluids to flow more slowly. |
| c | The ratio of inertial forces to viscous forces, and hence of the tendency for fluid flow to become chaotic. |
| d | The ratio of the speed of a fluid to its dynamic viscosity, and hence of the tendency for fluid flow to become chaotic. |

2. The terminal velocity of a particle falling under gravity in a fluid is:

| | |
| - | - |
| a | Proportional to the square of its size. |
| b | Independent of its shape. |
| c | Independent of the fluid viscosity. |
| d | Independent of temperature. |

3. The Stokes number is a measure of:

| | |
| - | - |
| a | The ratio of the velocity of a particle in a fluid to the velocity of an obstacle in its path, and hence of the likelihood of it striking the obstacle. |
| b | The ratio of the velocity at which a particle in a fluid is approaching an obstacle, to its terminal velocity in the fluid, and hence of the likelihood of it striking the obstacle. |
| c | The ratio of the inertial force acting on a particle moving towards an obstacle, to the drag force exerted by the fluid, and hence of the likelihood of it striking the obstacle. |
| d | The time needed for a particle to change its direction of motion, relative to that involved in passing an obstacle in its path, and hence of the likelihood of it striking the obstacle. |

4. The (specific) permeability of a porous medium is:

| | |
| - | - |
| a | The proportionality constant relating the flux of a fluid through the medium to the pressure gradient divided by the fluid viscosity. |
| b | Directly proportional to its porosity. |
| c | The proportionality constant relating the average velocity of a fluid flowing through the medium to the pressure gradient divided by the fluid viscosity. |
| d | Independent of the specific surface area of the medium. |

5. The sintering process that leads to a relatively loose assembly of powder particles being transformed into a material with acceptable strength and stiffness:

| | |
| - | - |
| a | Always involves removal of most or all of the porosity. |
| b | Requires the application of imposed pressure at high temperature. |
| c | Is driven by reduction of the surface area within the powder assembly. |
| d | Involves flow of a liquid and/or chemical reactions. |

### Deeper questions *The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

6. It is proposed that a new type of Diesel Particulate Filter (DPF), with a very high efficiency for extracting fine carbon particles, could be made by creating a bonded assembly of fine alumina fibres (diameter ~0.1 μm).  Use the Carman-Kozeny equation to estimate the expected permeability of such a material, assuming that it will be 80% porous.  If the maximum gas flux through the DPF is 500 m3 hr-1, its surface area is ~1 m2 and its wall thickness is ~1 mm, estimate the pressure drop across it and comment on whether this value is likely to be acceptable for satisfactory operation of the engine.  [The viscosity of the exhaust gas at the temperature concerned is about 3 × 10-5 Pa s.]

7. It is proposed that thermal spraying will be used to create an alumina (MPt. ~2040˚C) coating on a metal artefact, using a combustion torch with a flame temperature of 2300˚C.  The alumina powder to be used has an average particle size of about 20 μm and the volumetric heat capacity of alumina, c, is ~3 × 106 J m-3 K-1.  The stand-off distance to be used is 400 mm and the gas velocity is 200 m s-1.  The interfacial heat transfer coefficient under these conditions is estimated to be 10 kW m-2 K-1.  Assuming that injected particles reach the gas velocity very quickly, estimate their temperature at impact and hence decide whether the spraying process is likely to be successful.

Going further = ### Books * *Ceramic Processing and Sintering*, MN Rahaman, Marcel Dekker, New York (2003) ISBN 0-8247-0988-8. * *Introduction to the Principles of Ceramic Processing*, JS Reed, Wiley, New York (1988) ISBN 0-471-84554-X. ### Websites * European Powder Metallurgy Association site * Wikipedia entry on ceramic engineering
Aims On completion of this TLP you should: * Understand the difference between pyroelectrics and ferroelectrics. * Understand the main uses of pyroelectrics. * Understand the difficulties involved in using pyroelectrics, and how these issues are overcome. Before you start This TLP follows on well from those on ferroelectrics and piezoelectrics. It would be useful to read this after those. Introduction Pyroelectrics are the bridge between ferroelectrics and piezoelectrics: all ferroelectrics are pyroelectric, and all pyroelectrics are piezoelectric. They possess a spontaneous polarisation which is not necessarily switchable by an electric field. If their polarisation is electrically switchable then they are ferroelectric, a property that may be exploited e.g. for data storage. Pyroelectrics fill an entirely separate niche. The pyroelectric effect has been known of for a very long time. The Greek philosopher Theophrastus first noted (in approximately 400 B.C.) that tourmaline would attract straw and bits of wood when heated. This is because changing the temperature produces surface charges that are capable of attracting other charged materials. How this occurs has only recently been understood, but it is interesting to note how early the effect was documented. Polarisation A pyroelectric material possesses a spontaneous dipole moment, interpreted via the ionic positions. This dipole moment, when normalised by volume, yields a polarisation. Whether a given sample with local dipole moments possesses a net dipole moment depends on domain configurations, which in turn depend on the electrical history of the sample. This polarisation can change when a stress is applied to the material, as pyroelectrics are a sub-set of piezoelectrics. But if the material is pyroelectric and not also ferroelectric then the polarisation will not reverse under the application of an electric field. This is because it will break down first, i.e. the coercive field exceeds the breakdown field. If the material is ferroelectric then the coercive field is smaller than the breakdown field. In other words, ferroelectrics are a subset of pyroelectrics. Whether or not a material can be pyroelectric or ferroelectric depends upon whether the point group it belongs to is polar, i.e. whether there is at least one direction along which no point group symmetry element forces both sides of the crystal to be the same. The polar point groups are: 1, 2, m, mm2, 3, 3m, 4, 4mm, 6, 6mm. (Point groups will not be covered in this TLP.) Note that in some of these point groups, e.g. class 4, the polar axis is unique. Variation of Polarisation with Temperature With a pyroelectric, the polarisation *P* will typically decrease when its temperature is raised. This is because the increasing disorder results in a reduced segregation of charge, and so the arising dipoles are lessened in magnitude. The drop-off in polarisation can be seen in the diagram in the next section: the polarisation falls with increasing temperature, reaching zero at the Curie point. \[\underline {\Delta P} = \underline p \;\Delta T\] where *p* = pyroelectric coefficient (C m-2 K-1). The pyroelectric coefficient is a vector, with three components: \[\Delta {P\_i} = {p\_i}\Delta T\quad i = 1,2,3\]  Typically, however, the electrodes which measure this are placed along a principal crystallographic direction, and therefore the coefficient is often measured as a scalar, which is typically negative, representing a polarisation that falls with increasing temperature.
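The practical consequence of ΔP = p ΔT is that a heated element drives a current through an external circuit. A minimal Python sketch, with an assumed element size and heating rate (the TGS coefficient used is the value quoted later in this TLP):

```python
# dP = p*dT in action: an element of electrode area A heated at a rate
# dT/dt drives a current i = p * A * dT/dt through an external circuit.
p_TGS = -5.5e-4   # C m^-2 K^-1, triglycine sulphate (quoted below)
A = (2e-3)**2     # m^2, a 2 mm x 2 mm detector element (assumed)
dT_dt = 1.0       # K/s, assumed heating rate from incident radiation

i = abs(p_TGS) * A * dT_dt
print(f"pyroelectric current ~ {i*1e9:.1f} nA")   # ~2.2 nA
```

Currents of this size are small but readily measurable, which is why pyroelectric elements make sensitive infrared detectors.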
The pyroelectric coefficient measured under an applied field ***E*** is liable to differ from its true value, as explained below. When an electric field ***E*** is applied to a polar material, a moment arises, and the total response ***D*** (as measured as a charge per unit area on metallic plates either side of the pyroelectric) is expressed as:

\[D = \varepsilon \;{\rm{E}} + {P\_{\rm{s}}}\]

where ε = electrical permittivity of the pyroelectric and *P*s = spontaneous polarisation. This means that:

\[\frac{{\partial D}}{{\partial T}} = \frac{{\partial {P\_s}}}{{\partial T}} + {\rm{E}}\frac{{\partial \varepsilon }}{{\partial T}}\]

Since the pyroelectric coefficient *p*g relates changes in ***D*** to changes in *T*, we have a 'generalised' pyroelectric coefficient, given by:

\[{p\_{\rm{g}}} = \frac{{\partial D}}{{\partial T}} = \frac{{\partial {P\_{\rm{s}}}}}{{\partial T}} + {\rm{E}}\frac{{\partial \varepsilon }}{{\partial T}} = p + {\rm{E}}\frac{{\partial \varepsilon }}{{\partial T}}\]

which includes a term arising from the permittivity of the material being temperature dependent. This is what is measured. The true pyroelectric coefficient is given by:

\[p = \frac{{\partial {P\_{\rm{s}}}}}{{\partial T}}\]

as this defines the variation of the spontaneous polarisation with *T*. The effect due to changes in permittivity can sometimes be comparable in magnitude to true pyroelectricity, and can also be seen above the Curie point.

Behaviour around the Curie point

On warming towards the Curie point, above which the spontaneous polarisation of a pyroelectric disappears, the pyroelectric coefficient typically increases as the temperature dependence of the polarisation becomes stronger. The polarisation also depends on the order of the phase transition. The diagram below demonstrates how the polarisation changes near the Curie point.

![](images/order_near_curie_point.jpg)

For second order transitions, the pyroelectric coefficient is observed to be large. Materials which undergo first order transitions cannot be used for applications as they undergo hysteresis, such that the transition occurs at different temperatures depending on whether the material is being heated or cooled. This makes the Curie point unreliable. These factors mean that a pyroelectric is typically used at temperatures much lower than its Curie point. This results in pyroelectric coefficients being lower (since the coefficient rises towards the Curie point), but less variable with ambient temperature.

The Direct and Indirect Effect

As a pyroelectric material is also piezoelectric, thermal expansion can result in an indirect contribution to changes of polarisation, in addition to the direct effect discussed previously.

Example Pyroelectric Materials

As with many other dielectric materials, the predominant pyroelectric structure is the perovskite. Two examples are:

0.75Pb(Mg1/3Nb2/3)O3–0.25PbTiO3, which has a pyroelectric coefficient of −1300 μC m⁻² K⁻¹ as a single crystal. It is more commonly referred to as PMN-PT.

[Video: rotating crystal structure of PbMg/NbO3]

and Ba0.65Sr0.35TiO3, which has a pyroelectric coefficient of −7570 μC m⁻² K⁻¹. It is also known as BST.

[Video: rotating crystal structure of Ba/SrTiO3]
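A minimal sketch comparing the two perovskites above as detector elements: the temperature change needed to produce a just-detectable charge. The coefficients are the values quoted above; the charge threshold and element area are illustrative assumptions.

```python
# Temperature resolution of a simple detector element, dT = Q_min / (|p| * A),
# for the two perovskite coefficients quoted above. Charge threshold and
# element area are assumed for illustration.

Q_min = 1e-12     # smallest measurable charge (C), assumed
A     = 1e-6      # element area, 1 mm^2 (assumed)

for name, p in [("PMN-PT", 1300e-6), ("BST", 7570e-6)]:   # |p| in C m^-2 K^-1
    dT = Q_min / (p * A)
    print(f"{name}: resolvable dT ~ {dT*1e3:.2f} mK")
# BST's larger coefficient gives roughly six times finer temperature resolution.
```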
However, there are other types of pyroelectric material which do not have a perovskite structure. Two of the more common of these will also be considered.

Triglycine sulphate =

This has the formula (NH2CH2COOH)3H2SO4, and variations on this have given some of the highest pyroelectric coefficients. The structure is shown below:

![Crystal structure of triglycine sulphate](images/TGS.gif)

The glycine (NH2CH2COOH) groups are polar, but the most important is the glycine 1 group, as the reversal of the polarisation of the material is associated with a rotation of this group about the 'a' axis. This changes the crystal into a mirror image of itself. In either state, below the Curie point (~50°C), the crystal is point group 2, with a polar axis along the 'b' axis.

Typically, the crystal is grown from aqueous solution, with whatever modifications are required. Modifications may include deuteration or more esoteric ideas such as substituting glycine with other amino-acid-style groups. These give altered properties such as the Curie temperature, or the thermal stability of the pyroelectric coefficient.

It has a pyroelectric coefficient of −5.5 × 10⁻⁴ C m⁻² K⁻¹, measured at 30°C. It is useful for the pyroelectric 'vidicon', a device used as a camera for thermal imaging. This is used by disaster teams to find people trapped under rubble, etc.

Polyvinylidene fluoride =

This is a carbon backbone polymer, with repeat unit (-CH2-CF2-). It takes up several different conformations, which possess differing properties. The trans-gauche configuration results in molecules stacked so as to produce a non-polar unit cell, whereas the all-trans configuration results in a polar unit cell.

Possible configurations:

![The arranged crystal structure of PVDF](images/polymer_pyroelectric.jpg)

Unit cells:

![The arranged crystal structure of PVDF](images/polymer_pyroelectric_unit_cell.jpg)

PVDF does not possess a particularly impressive pyroelectric coefficient (−0.27 × 10⁻⁴ C m⁻² K⁻¹), but it is highly useful in that large area thin films can very easily be made. Such films are important, as they generate a large charge which can easily be detected. This overcomes most of the other failings of PVDF. The large area thin film is most often used for measuring the energy of laser beams in laboratories, as a large area is required to detect the entire beam. Since it is also possible to make small area thin films very cheaply, PVDF is easily made use of in burglar alarms.

(This information is sourced from: Pyroelectric devices and materials (review article), R W Whatmore, *Rep. Prog. Phys.* **49** (1986) 1335-1386, which provides plenty of good information on pyroelectrics.)

Application of a Pyroelectric – Infrared detection

There are problems with such a detector. As discussed in the previous section, there is the possibility of an indirect effect, with the thermal expansion of the detector causing a polarisation to develop by piezoelectricity. This produces 'noise' of a sort, and can mask the signal generated by the target object. There is also the possibility of external stresses being applied. This is a problem, as it means the detector does not work as well as it should. These problems are typically counteracted by the use of a second pyroelectric component, i.e. a reference element. See below:

![The use of two pyroelectric plates to cancel out the effects of thermal stress](images/compensation_of_infrared_detectors.jpg)

Pollutant Control =

Pollution is a very important issue. The reduction of pollution and greenhouse gases has become a major priority for governments. We need to be able to monitor levels of pollution in order to assess the situation.
This is where the pyroelectric comes in. Pyroelectrics, as shown previously, are excellent detectors of IR radiation. Therefore, they make ideal devices for testing the level of IR radiation that passes through a gas sample. As the wavelength at which a gas absorbs usually uniquely identifies that gas, this makes an excellent means of detection.

Summary =

As seen, the pyroelectric has somewhat limited use. However, it is a rather important type of material, due to the position it holds between piezoelectrics and ferroelectrics. It must also be considered that while it does not have a wide range of uses, its use in the motion detector is very common, and as such it is important to understand it.

Questions =

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. When the temperature of a pyroelectric is changed, where does the resultant charge imbalance arise?

| | | |
| - | - | - |
| | a | On the surface with one end positive and one end negative. |
| | b | In the centre, at a point of mismatched domains. |
| | c | On the surface with both ends positive. |
| | d | On the surface with both ends negative. |

2. Which of these is not a basic requirement for a pyroelectric?

| | | |
| - | - | - |
| | a | A temperature dependent polarisation. |
| | b | A spontaneous polarisation. |
| | c | A polar structure. |
| | d | A simple cubic structure. |

3. What type of variable is polarisation?

| | | |
| - | - | - |
| | a | Scalar. |
| | b | Vector. |
| | c | Constant. |
| | d | Symmetric. |

4. What can direct electrical measurement necessarily identify?

| | | |
| - | - | - |
| | a | True pyroelectric coefficient. |
| | b | Dipole moment. |
| | c | Generalised pyroelectric coefficient. |
| | d | Domain structure. |

Going further =

### Books

A.J. Moulson & J.M. Herbert, Electroceramics: Materials, Properties, Applications (2nd Ed., Wiley, 2003)

J.F. Nye, Physical Properties of Crystals (Oxford University Press)

R.E. Newnham, Structure-Property Relations (Springer-Verlag, 1975)
Aims

On completion of this TLP you should:

* understand the concept of Raman scattering and how it can be used in spectroscopy
* be aware of some advantages of Raman spectroscopy
* know about some variations on Raman spectroscopy.

Before you start

There are no special prerequisites for this TLP.

Introduction

A large variety of spectroscopic techniques are available for the analysis of materials and chemicals. Among these is Raman spectroscopy. This relies on Raman scattering of light by a material, where the light is scattered inelastically, as opposed to the more prominent elastic Rayleigh scattering. This inelastic scattering causes shifts in wavelength, which can then be used to deduce information about the material. Properties of the material can be determined by analysis of the spectrum, and/or it may be compared with a library of known spectra to identify a substance. Since the discovery of Raman scattering in the 1920s, technology has progressed such that Raman spectroscopy is now an extremely powerful technique with many applications.

Raman scattering

Raman scattering (sometimes called the Raman effect) is named after the Indian physicist C. V. Raman, who discovered it in 1928, though predictions had been made of such an inelastic scattering of light as far back as 1922. The importance of this discovery was recognised even then, and for his observation of this effect Raman was awarded the 1930 Nobel Prize in Physics. This was and remains the shortest time from a discovery to awarding of the Prize. In fact Raman was so confident that he arranged his travel to Stockholm several months in advance of the recipients being announced! This confidence seems quite justified, given that within a year and a half of his discovery, more than 150 papers mentioning the effect had been published. Since then Raman scattering has given rise to a number of important technologies, and foremost among these is Raman spectroscopy.

Most light passing through a transparent substance undergoes Rayleigh scattering. This is an elastic effect, which means that the light does not gain or lose energy during the scattering. Therefore it stays at the same wavelength. The amount of scattering is strongly dependent on the wavelength, being proportional to λ⁻⁴. (It is this fact that makes the sky blue: the shorter wavelength blue components in the Sun's light are Rayleigh scattered in the atmosphere far more than the longer wavelengths. Blue light is then seen coming from all over the sky. The scattering of blue light from its direct path from the Sun also causes the Sun itself to appear yellow.)

In *Rayleigh* scattering a photon interacts with a molecule, polarising the electron cloud and raising it to a "virtual" energy state. This is extremely short lived (on the order of 10⁻¹⁴ seconds) and the molecule soon drops back down to its ground state, releasing a photon. This can be released in any direction, resulting in scattering. However since the molecule is dropping back to the same state it started in, the energy released in the photon must be the same as the energy from the initial photon. Therefore the scattered light has the same wavelength.
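A minimal sketch of how strongly the λ⁻⁴ dependence favours short wavelengths; the two wavelengths are round illustrative numbers for blue and red light.

```python
# Rayleigh scattering scales as lambda^-4: ratio of scattering strength for
# blue vs red light. Wavelengths are round numbers chosen for illustration.

lam_blue = 450e-9   # m
lam_red  = 650e-9   # m

ratio = (lam_red / lam_blue) ** 4
print(f"Blue light is Rayleigh scattered ~{ratio:.1f}x more strongly than red")
# ~4.4x - which is why the sky appears blue.
```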
*Raman* scattering is different in that it is inelastic. The light photons lose or gain energy during the scattering process, and therefore increase or decrease in wavelength respectively. If the molecule is promoted from a ground to a virtual state and then drops back down to a (higher energy) vibrational state then the scattered photon has less energy than the incident photon, and therefore a longer wavelength. This is called *Stokes scattering*. If the molecule is in a vibrational state to begin with and after scattering is in its ground state then the scattered photon has more energy, and therefore a shorter wavelength. This is called *anti-Stokes scattering*.

![Transitions for Rayleigh, Stokes and anti-Stokes scattering](images/transitions.gif)

Three different forms of scattering

Only about 1 in 10⁷ photons undergo Stokes Raman scattering, and so this is usually swamped by the far more prominent Rayleigh scattering. The amount of anti-Stokes scattering is even less than this.

The shift due to the Raman effect is determined by the spacing between the vibrational and the ground states, i.e. by the phonons of the system. The Stokes and anti-Stokes scattered light will be shifted an equal distance on opposite sides of the Rayleigh scattered light. Therefore the spectrum is symmetrical about the wavelength of light used, apart from the difference in intensities. Normally in Raman spectroscopy only the Stokes half of the spectrum is used, due to its greater intensity.
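A minimal sketch of the bookkeeping between Raman shift (conventionally quoted in cm⁻¹) and the scattered wavelengths; the 532 nm laser line and 1000 cm⁻¹ shift are illustrative values, not taken from the TLP.

```python
# Converting a Raman shift (in cm^-1) into Stokes / anti-Stokes wavelengths
# for a given laser line. Laser wavelength and shift are assumed values.

laser_nm = 532.0
shift_cm = 1000.0                         # Raman shift (cm^-1)

nu0 = 1e7 / laser_nm                      # laser wavenumber (cm^-1), as 1 nm = 1e-7 cm
stokes_nm      = 1e7 / (nu0 - shift_cm)   # scattered photon loses energy
anti_stokes_nm = 1e7 / (nu0 + shift_cm)   # scattered photon gains energy

print(f"Stokes: {stokes_nm:.1f} nm, anti-Stokes: {anti_stokes_nm:.1f} nm")
# ~561.9 nm and ~505.1 nm: equal shifts either side of the Rayleigh line.
```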
In one of Raman's experiments demonstrating inelastic scattering he used light from the Sun, focused using a telescope, to obtain a high intensity beam. This was passed through a monochromatic filter, and then through a variety of liquids where it underwent scattering. After passing through these he observed it with a crossed filter that blocked the monochromatic light. Some light was seen passing through this filter, which showed that its wavelength had been changed.

![Raman's experiment](images/raman_s_expt.gif)

Comparison with other types of spectroscopy =

It is instructive to compare the process of Raman scattering with some other spectroscopic techniques. In the commonly used infrared absorption spectroscopy, infrared light excites certain vibrational frequencies of molecules and is absorbed by them, not re-emitted. This gives an absorption spectrum, with bands at characteristic frequencies. Other absorption techniques use higher energy radiation (e.g. ultraviolet) and raise electrons to an excited state.

Fluorescence occurs when light (often UV) is incident on a molecule and promotes an electron to an excited state. The molecule is also vibrating. Firstly it relaxes from its vibrational state, dissipating this energy (normally as heat). Then when it drops back down to the ground state, the photon released has less energy than the incident photon. The increased wavelength often means that the light is now in the visible region. This is how fluorescent lighting works: by ionisation of mercury to produce UV light, which is then absorbed by a fluorescent coating and re-radiated as visible light. Fluorescence can also be used for spectroscopy. Below is an overview of some different interactions of light with a molecule.

Raman active modes

The Raman shift depends on the energy spacing of the molecule's modes. However not all modes are "Raman active", i.e. not all appear in Raman spectra. For a mode to be Raman active it must involve a change in the polarisability, α, of the molecule, i.e.

\({\left( {\frac{{{\rm{d}}\alpha }}{{{\rm{d}}q}}} \right)\_{\rm{e}}} \ne 0\)

where q is the normal coordinate and e the equilibrium position. This is known as the spectroscopic selection rule. Some vibrational modes (phonons) can cause this. These are generally the most important, although electronic modes can have an effect, and rotational modes may be observed in gases at low pressure.

The spectroscopic selection rule for infrared spectroscopy is that only transitions that cause a change in dipole moment can be observed. Because this relates to different vibrational transitions than in Raman spectroscopy, the two techniques are complementary. In fact for centrosymmetric molecules (those possessing an inversion centre) the Raman active modes are IR inactive, and vice versa. This is called the rule of mutual exclusion.

The origin of Stokes and anti-Stokes scattering due to vibrational modes can be explained in terms of the oscillations involved. The polarisability (α) of the molecule depends on the bond length, with shorter bonds being harder to polarise than longer bonds. Therefore if the polarisability is changing then it will oscillate at the same frequency that the molecule is vibrating (νvib).

Polarisability of the molecule:

\[\alpha = {\alpha \_0} + {\alpha \_1}\sin (2\pi {\nu \_{{\rm{vib}}}}t)\]

There is an external oscillating electric field from the photon, with a frequency νp:

\[E = {E\_0}\sin (2\pi {\nu \_{\rm{p}}}t)\]

Therefore the induced dipole moment is:

\[{p\_{{\rm{ind}}}} = \alpha E = ({\alpha \_0} + {\alpha \_1}\sin (2\pi {\nu \_{{\rm{vib}}}}t)) \times {E\_0}\sin (2\pi {\nu \_{\rm{p}}}t)\]

Using the trigonometric identity:

\[\sin A \times \sin B = \frac{{\cos (A - B) - \cos (A + B)}}{2}\]

The induced dipole moment is:

\[{p\_{{\rm{ind}}}} = {\alpha \_0}{E\_0}\sin (2\pi {\nu \_{\rm{p}}}t) + \frac{{{\alpha \_1}{E\_0}}}{2}\cos (2\pi ({\nu \_{\rm{p}}} - {\nu \_{{\rm{vib}}}})t) - \frac{{{\alpha \_1}{E\_0}}}{2}\cos (2\pi ({\nu \_{\rm{p}}} + {\nu \_{{\rm{vib}}}})t)\]

(note that the two shifted terms scale with α1, the amplitude of the polarisability oscillation). A dipole moment oscillating at frequency ν results in a photon of frequency ν. Therefore in this case there are photons scattered at frequency νp (Rayleigh scattering), νp – νvib (Stokes scattering) and νp + νvib (anti-Stokes scattering). Of course if the polarisability is not changing then the dipole moment will simply oscillate at frequency νp, and only Rayleigh scattering will occur. This is the origin of the spectroscopic selection rule for Raman scattering. (A short numerical check of this result is given at the end of this section.) The Raman (and IR) activity of more complicated molecules can be determined using their symmetry and group theory, which goes beyond the scope of this TLP. There are links to more information in the Going further section.

The above is based on single molecules in a gas, and hence not interacting with neighbours. In materials science Raman techniques are more often used for solids, where molecules cannot be taken individually. In crystalline materials vibrations are quantised as *phonons*, modes determined by the crystal structure. The spectroscopic selection rule still applies, i.e. only phonons that involve a change in polarisability are Raman active. Phonons are generally of a lower frequency than the vibrations in gases, so result in lower wavenumber shifts. Structural information can therefore be determined from these shifts. Crystal orientation can also be determined from the polarisation of the scattered light.
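As a quick sanity check on the classical picture above, a minimal numerical sketch: the spectrum of the induced dipole moment contains exactly the three lines predicted. All frequencies are arbitrary illustrative numbers, not physical values.

```python
# Numerical check: a dipole driven by E0*sin(2*pi*nu_p*t), with polarisability
# modulated at nu_vib, develops sidebands at nu_p +/- nu_vib.
import numpy as np

nu_p, nu_vib = 100.0, 10.0          # 'photon' and vibration frequencies (a.u.)
a0, a1, E0 = 1.0, 0.2, 1.0

t = np.arange(0, 10, 5e-4)          # 20000 samples; record length matches bins
p = (a0 + a1 * np.sin(2*np.pi*nu_vib*t)) * E0 * np.sin(2*np.pi*nu_p*t)

spec = np.abs(np.fft.rfft(p))
freq = np.fft.rfftfreq(len(t), d=t[1] - t[0])
peaks = freq[np.argsort(spec)[-3:]]          # three strongest spectral lines
print(np.sort(np.round(peaks, 1)))           # -> [ 90. 100. 110.]
# Rayleigh line at nu_p, plus Stokes/anti-Stokes sidebands at nu_p -/+ nu_vib.
```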
Method (dispersive Raman spectroscopy)

![Portable Raman spectrometer, as used at NASA](images/portable_Raman_spec_small _NASA.jpg)

Portable Raman spectrometer, as used at NASA

Raman used light from the Sun focused through a telescope to achieve a high enough intensity in his scattered signal. Modern spectrometers use both improved sources and more sensitive detectors to obtain better results. Early spectrometers used mercury arc lamps as a light source. Now lasers are normally employed due to their high intensity, single wavelength and coherent beam.

Initial spectrometers used photographic plates to detect the light. The advent of more sensitive photomultiplier tubes led to their widespread use, allowing the data to be collected and manipulated electronically. However they had the disadvantage of only being able to count one wavelength at a time. Modern spectrometers use charge-coupled devices (CCDs) that combine the advantages of the previous techniques, being highly sensitive, electronic, and able to measure a whole spectrum at once.

The chief difficulty in Raman spectroscopy is preventing overlapping of the Raman signal by stray light from the far more intense Rayleigh scattering. Interference notch filters are commonly used, which filter out wavelengths within approximately 100 cm⁻¹ of the laser wavelength. However these are obviously of no use for studying low Raman shifts (e.g. those produced by low frequency phonons) within this region. (A short sketch of the blocked window is given at the end of this section.) One improvement is to use multiple stages for dispersion, with either double or triple spectrometers. Holographic diffraction gratings can be used, which result in much less stray light than ruled ones.

A simplified diagram of a Raman spectrometer's operation is shown below.

![Schematic of a spectrometer](images/spectrometer_schematic.gif)

An important consideration in Raman spectroscopy is the spectral resolution, the ability to resolve features within the spectrum. There are two ways to increase spectral resolution: by increasing the focal length, or by changing the grating used to disperse the spectrum. Doubling the focal length approximately doubles the spectral resolution. Similarly, doubling the density of lines on the grating results in twice the dispersion and twice the spectral resolution. However higher density gratings have restricted working ranges, e.g. a grating with 2000 lines per mm cannot be used for infrared work.

The choice of wavelength used is important, and can range from the near infrared into the ultraviolet. As already mentioned, the choice may be limited by the density of the diffraction grating. In addition, for materials that show fluorescence it is vital to choose a longer wavelength that will minimise fluorescence, as otherwise this will swamp the weak Raman effect. However, higher energy ultraviolet lasers can be useful for penetrating certain samples where fluorescence is not a problem. Another consideration is that visible lasers are generally easier to work with. These varying factors mean that many spectrometers have a number of lasers, which can be switched as appropriate. Of course different lasers will require different filters to remove the Rayleigh scattered light.
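A minimal sketch of the wavelength window such a notch filter removes. The 532 nm laser line is an assumption for illustration; only the ~100 cm⁻¹ half-width comes from the text above.

```python
# Wavelength window blocked by a notch filter that removes everything within
# ~100 cm^-1 of the laser line. A 532 nm laser is assumed for illustration.

laser_nm = 532.0
half_width_cm = 100.0                    # filter half-width (cm^-1)

nu0 = 1e7 / laser_nm                     # laser wavenumber (cm^-1)
lo_nm = 1e7 / (nu0 + half_width_cm)      # short-wavelength edge
hi_nm = 1e7 / (nu0 - half_width_cm)      # long-wavelength edge

print(f"Blocked: {lo_nm:.1f} - {hi_nm:.1f} nm around {laser_nm} nm")
# ~529.2-534.9 nm: Raman shifts below ~100 cm^-1 are lost with this filter.
```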
Raman microspectroscopy =

Raman spectroscopy can also be used for microscopic analysis and imaging. There are two main methods: direct imaging and hyperspectral imaging (chemical imaging). Direct imaging involves examining the whole sample for characteristic shifts, e.g. of a single compound. This generates an image showing the distribution of that compound. In hyperspectral imaging Raman spectra are taken at points across the sample, so that multiple compounds and their distributions can be identified. The disadvantage is that with a spectrum taken for every pixel, this requires a lot of computing power and storage space.

The instrument in the Department of Materials Science & Metallurgy, University of Cambridge, is a typical microspectrometer, manufactured by Renishaw. An interactive diagram of this is shown below, to give a feel for the components and operation. Hover over components to see a description of them, and click 'Play' to see the path that light takes through the spectrometer.

Alternative techniques

### Fourier transform (FT) Raman

Unlike dispersive Raman spectroscopy, which obtains a spectrum by diffraction of the different wavelengths, Fourier transform (FT) Raman creates an interference pattern that can be analysed to recover the spectrum. It has the advantage of being faster than the dispersive technique, but is limited in resolution and choice of laser wavelength.

### Stimulated Raman

Stimulated Raman scattering is a non-linear phenomenon that results in a much larger Raman signal than standard scattering (4 to 5 orders of magnitude greater). It can be triggered with a strong laser pulse. Only the strongest Raman active mode is excited at first, but scattering from it can be strong enough to excite the second mode, which in turn can excite the third, and so on in a cascade effect.

![Diagram of stimulated Raman spectrum](images/stimulated.gif)

Diagram of Stimulated Raman

### Resonance Raman

Another effect that greatly increases the magnitude of scattering is Resonance Raman (RR). This occurs when the energy of the incident radiation is close to that of an electronic excitation energy (i.e. the band gap). Tunable lasers can be used to achieve this. The Raman scattering of vibrational modes around this excited state is then greatly enhanced by resonance effects.

### Surface Enhanced Raman Spectroscopy (SERS)

Surface Enhanced Raman Scattering (SERS) is a process that can occur when samples are adsorbed on gold or silver surfaces. It results in a vast increase in the Raman effect, and is therefore useful for spectroscopy. Though the mechanism is not very well understood, it is believed to be a combination of chemical enhancement of polarisability by bonds formed between the sample and the surface, and electromagnetic resonance of small gold or silver particles. This effect can be combined with Resonance Raman for Surface Enhanced Resonance Raman Spectroscopy (SERRS), which results in very strongly enhanced signals, up to 10¹⁴ times more intense than standard Raman scattering.

### Coherent Anti-Stokes Raman Spectroscopy (CARS)

Coherent Anti-Stokes Raman Spectroscopy (CARS) is a technique involving two lasers. One is of a fixed frequency ν1 and the other is tunable to a lower frequency ν2. When combined they result in coherent radiation at frequency ν' = 2ν1 − ν2, along with a number of other frequencies. If there is a Raman active mode with characteristic frequency νm, then when the second laser is tuned such that ν2 = ν1 − νm, the coherent emission emerges in a high intensity narrow beam with frequency

ν' = 2ν1 − (ν1 − νm) = ν1 + νm

This is the anti-Stokes frequency, hence the name. (The frequency bookkeeping is sketched at the end of this section.)

![Cars](images/cars.gif)

Diagram of Coherent Anti-Stokes Raman Spectroscopy

### Raman optical activity – compares polarisations

Raman optical activity is a technique which compares the different polarisations of Raman scattered light from chiral molecules, such as those found in biology, in order to determine more about their structure. This can also be combined with amplifying techniques such as resonance and surface enhancement.
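A minimal sketch of the CARS frequency bookkeeping described above; the numbers (in cm⁻¹) are arbitrary illustrative values.

```python
# CARS frequencies: with the second laser tuned to nu2 = nu1 - num, the
# coherent output 2*nu1 - nu2 lands at the anti-Stokes frequency nu1 + num.
# Both input values are arbitrary illustrative numbers (cm^-1).

nu1 = 18797.0        # fixed laser (cm^-1)
num = 1000.0         # Raman active mode (cm^-1)

nu2 = nu1 - num              # tunable laser set on the Stokes frequency
nu_out = 2 * nu1 - nu2       # CARS output
assert nu_out == nu1 + num   # anti-Stokes frequency, as stated above
print(f"nu' = {nu_out:.0f} cm^-1 (= nu1 + num)")
```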
Advantages and disadvantages

### Advantages...

Raman spectroscopy has a number of advantages over other analysis techniques.

* Can be used with solids, liquids or gases.
* No sample preparation needed. For infrared spectroscopy solids must be ground with KBr into pellets or mixed with Nujol to form a mull.
* Non-destructive.
* No vacuum needed, unlike some techniques, which saves on expensive vacuum equipment.
* Short time scale. Raman spectra can be acquired quickly.
* Can work with aqueous solutions (infrared spectroscopy has trouble with aqueous solutions because the water interferes strongly with the wavelengths used).
* Glass vials can be used (unlike in infrared spectroscopy, where the glass causes interference).
* Can be used down fibre optic cables for remote sampling.

### ...and disadvantages

* Cannot be used for metals or alloys.
* The Raman effect is very weak, which leads to low sensitivity, making it difficult to measure low concentrations of a substance. This can be countered by using one of the alternative techniques (e.g. Resonance Raman) which increase the effect.
* Can be swamped by fluorescence from some materials.

Applications

There are a huge number of applications of Raman spectroscopy. Below are a few notable examples.

### Measuring/mapping stress

Raman spectroscopy can be used to measure stress and strain in materials. Tensile strain increases the length of the bonds and the tension in them, hence changing the frequency of the phonons. It therefore causes a shift in the observed Raman bands towards lower wavenumbers.

### Forensics, explosives/drugs detection

![Photo of Raman integrated tunable sensor](images/RAMITS.jpg)

Photo of a Raman integrated tunable sensor (RAMITS)

Advances in technology have led to much smaller spectrometers, which are moving from the laboratory bench towards handheld devices that can be used for analysis in the field. They may be linked to a library of spectra, and can be used by law enforcement and customs officials to detect explosives, drugs and other chemicals. They are also useful for quickly identifying possibly hazardous materials, e.g. after a spillage. Pictured is a Raman integrated tunable sensor (RAMITS) developed by the US government. It has a probe coated with silver nanoparticles, which allows Surface Enhanced Raman Spectroscopy, boosting the signal. The instrument is handheld and battery powered.

### Process monitoring

Raman spectroscopy is a non-destructive process, and can be used to monitor industrial processes. The speed of analysis means that it can give almost real-time information. Another advantage is that the light to be monitored can be sent down fibre-optics, so that the Raman equipment can be located some distance away from the actual processing.

### Uncovering artistic techniques

![Photo of Book of Kells](images/bookofkells.jpg)

A page from the Book of Kells (in the public domain)

As well as monitoring state of the art processes, Raman spectroscopy is being used to uncover the secrets of ancient artefacts. Scientists at Trinity College in Dublin are using Raman spectroscopy to examine the famous Book of Kells, an illustrated manuscript dating from the 9th century. They hope to determine the composition and origins of the paper, inks and pigments used, which will tell them about techniques used and trade routes of the age.

### Life on Mars

Raman spectroscopy could also be used to search for life on Mars. Modern Raman technology has been miniaturised to the point that a small spectroscope will be carried on a future mission to the planet.
The instrument will be used to look for evidence of life and/or life supporting conditions, either in the present or the distant past, as well as more general analysis of the Martian surface. Similar instruments could be featured on missions to other potential sites of life such as Europa or Callisto.

### Carbon nanotubes

Because of their structure, carbon nanotubes can be made to resonate with light. They may resonate with either the incident wavelength, or Raman scattered wavelengths. Resonance can also occur for a number of different modes. Some of the most important are the radial breathing mode, the disorder mode and the high energy mode. Observations of these can be used to determine important properties of the nanotubes, such as their diameter and strain. Raman spectroscopy is one of the easiest ways of measuring these vital properties.

Summary =

Raman scattering is the inelastic scattering of light. It is caused by light interacting with molecules that have a changing polarisability, often due to vibrations. It forms the basis of Raman spectroscopy, where the shifts in wavelength are used to determine modes of a sample, which can be solid, liquid or gas. These modes can be vibrational (e.g. phonons), rotational or other low frequency modes. Raman spectroscopy is an important technique that is now used in a wide variety of applications.

Questions =

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. Which type of scattering results in a longer wavelength than the incident light?

| | | |
| - | - | - |
| | a | Rayleigh |
| | b | Stokes |
| | c | Anti-Stokes |

2. Which type of scattering is the strongest?

| | | |
| - | - | - |
| | a | Rayleigh |
| | b | Stokes |
| | c | Anti-Stokes |

3. Which of these properties must change for a mode to be Raman active?

| | | |
| - | - | - |
| | a | Volume |
| | b | Dipole moment |
| | c | Polarisability |

4. Sulphur hexafluoride (SF6) is centrosymmetric. Which of these statements is true?

| | | | |
| - | - | - | - |
| Yes | No | a | SF6 has an inversion centre. |
| Yes | No | b | SF6 obeys the rule of mutual exclusion. |
| Yes | No | c | Raman active modes will be IR active also. |
| Yes | No | d | IR active modes will be Raman inactive. |
| Yes | No | e | Modes will be Raman active if they involve a change in dipole moment. |

5. Can these alternative techniques be used to give a stronger signal than normal Raman spectroscopy?

| | | | |
| - | - | - | - |
| Yes | No | a | Resonance Raman |
| Yes | No | b | Fourier transform Raman |
| Yes | No | c | Coherent Anti-Stokes Raman Spectroscopy |
| Yes | No | d | Stimulated Raman |
| Yes | No | e | Raman optical activity |

### Deeper questions

*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

6. What advantages does Raman spectroscopy have for process monitoring?

8. Calculate the wavenumber shift for the vibrational mode of Cl2, given that the force constant k for the bond is 3.23 N cm⁻¹. (A numerical sketch of this calculation follows below.)
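A minimal sketch of the harmonic-oscillator estimate behind question 8. The force constant is the one quoted in the question; the chlorine atomic mass and the reduced-mass formula for a homonuclear diatomic are standard data, not taken from the TLP.

```python
# Question 8 sketch: harmonic-oscillator wavenumber for Cl2,
# nu_bar = (1/(2*pi*c)) * sqrt(k/mu), with mu the reduced mass.
import math

k    = 323.0                   # force constant (N/m), i.e. 3.23 N cm^-1
m_Cl = 35.45 * 1.6605e-27      # mass of one Cl atom (kg), standard data
mu   = m_Cl / 2                # reduced mass of a homonuclear diatomic
c    = 2.998e10                # speed of light (cm/s), for a result in cm^-1

nu_bar = math.sqrt(k / mu) / (2 * math.pi * c)
print(f"wavenumber shift ~ {nu_bar:.0f} cm^-1")
# ~556 cm^-1, close to the measured Cl2 Raman shift of ~554 cm^-1.
```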
Going further =

### Books

* Cardona M, *Light Scattering in Solids (Topics in Applied Physics Volume 8)*, Springer-Verlag, 1975.

### Websites

* A site including a biography of Raman, the 1930 Nobel Prize presentation speech and his Nobel lecture.
* A guide to the cause of Raman scattering.

Aims

On completion of this TLP you should:

* understand what a reciprocal lattice is
* understand the construction of a reciprocal lattice from a known real lattice
* understand the importance of reciprocal space in planning experiments and interpreting diffraction data

Before you start

To understand the application to diffraction you will find it helpful to complete the diffraction TLPs first. To understand the construction and results of the reciprocal lattice you will first need to be familiar with the real-space crystal lattice.

Introduction

The concept of reciprocity has already been introduced in other TLPs, for example within the Bragg and Scherrer equations. This inverse scaling between real and reciprocal space is based on Fourier transforms. Josiah Willard Gibbs first formalised reciprocal lattice vectors in 1881. The reciprocal vectors lie in "reciprocal space", an imaginary space where planes of atoms are represented by reciprocal points, and all lengths are the inverse of their length in real space.

In 1913, P. P. Ewald demonstrated the use of the Ewald sphere together with the reciprocal lattice to understand diffraction. It geometrically represents the conditions in reciprocal space where the Bragg equation is satisfied.

Later in the TLP, we will formalise the relationship between the real lattice vectors and unit cell and the reciprocal lattice, show the construction of the Ewald sphere, and demonstrate some of its uses.

Reciprocal space

The animation below shows the relationship between the real lattice and the reciprocal lattice. Note that this 2D representation omits the **c\*** vector, but that it follows the same rules as **a\*** and **b\***. The key things to note are that:

* The reciprocal lattice has reciprocal vectors **a\*** and **b\***, separated by the angle γ\*.
* **a\*** is perpendicular to the (100) planes, and equal in magnitude to the inverse of d100.
* Similarly, **b\*** is perpendicular to the (010) planes and equal in magnitude to the inverse of d010.
* γ and γ\* will sum to 180º.

Due to the linear relationship between planes (for example, d200 = ½ d100), a periodic lattice is generated. In general, the periodicity in the reciprocal lattice is given by

\({\rho \_{hkl}}^ \* = \frac{1}{{{d\_{hkl}}}}\)

In vector form, the general reciprocal lattice vector for the (h k l) plane is given by

\({{\rm{s}}\_{hkl}} = \frac{{{{\rm{n}}\_{hkl}}}}{{{d\_{hkl}}}}\)

where **n**hkl is the unit vector normal to the (h k l) planes. This concept can be applied to crystals, to generate a reciprocal lattice of the crystal lattice. The units in reciprocal space are Å⁻¹ or nm⁻¹.

Mathematical representation of reciprocal lattice =

We want reciprocal lattice vectors such that each reciprocal vector is the inverse in magnitude of the corresponding interplanar spacing, and is normal to the associated planes.
So,

$$\left| {{\bf{a}}\*} \right| = {1 \over {{d\_{100}}}} = {1 \over {\left| {\bf{a}} \right|\cos (\gamma - {\pi \over 2})}}$$

and

$${{{\bf{a}}\*} \over {\left| {{\bf{a}}\*} \right|}} = {{{\bf{b}} \times {\bf{c}}} \over {\left| {{\bf{b}} \times {\bf{c}}} \right|}}$$

Therefore,

$${\bf{a}}\* = {{{\bf{b}} \times {\bf{c}}} \over {{\bf{a}}.{\bf{b}} \times {\bf{c}}}}$$

and similarly:

$${\bf{b}}\* = {{{\bf{c}} \times {\bf{a}}} \over {{\bf{a}}.{\bf{b}} \times {\bf{c}}}}$$

$${\bf{c}}\* = {{{\bf{a}} \times {\bf{b}}} \over {{\bf{a}}.{\bf{b}} \times {\bf{c}}}}$$

(A short numerical sketch using these formulae is given at the end of this section.)

### Fourier Analysis of Periodic Potential

The periodic potential of a lattice is given by:

\(U({\rm{r}}) = \sum\limits\_K {{U\_K}} \exp (i2\pi {\rm{K}}.{\rm{r}})\)

where *U*K is the coefficient of the potential, and **r** is a real position vector. However, only those values of **K** are allowed which are reciprocal lattice vectors (**S**).

**Proof:**

\(U({\rm{r}}) = \sum\limits\_S {{U\_S}} \exp (i2\pi {\rm{S}}{\rm{.r}})\)

Since *U*(**r**) = *U*(**r** + **R**), where **R** is a lattice vector,

\(\sum\limits\_S {{U\_S}} \exp (i2\pi {\rm{S}}{\rm{.r}}) = \sum\limits\_S {{U\_S}} \exp (i2\pi {\rm{S}}.({\rm{R}} + {\rm{r}}))\)

so each Fourier component must satisfy

1 = exp(i 2π **S**·**R**)

i.e. **S**·**R** = *n*, where *n* is an integer. The only possible values are of the form

***G*** = h**a**\* + k**b**\* + l**c**\*

with h, k, l integers, since then **G**·**R** is an integer for any lattice vector **R** = U**a** + V**b** + W**c** (**G**·**R** = hU + kV + lW).

Note: this is strictly the crystallographers' definition of reciprocal lattice vectors. In solid-state physics, a 2π factor is included as a scalar within **S**. The 2π factor may be included or omitted depending on the application.

Ewald sphere

Consider a circle of radius r, with points O, B and C lying on the circumference.

![](images/Ewald-Sphere.gif)

By using simple trigonometry,

\[\sin \theta = \frac{{OB}}{{2r}}\]

If this geometry is constructed in reciprocal space, then it has some important implications. The radius can be set to 1/λ, where λ is the experimental wavelength. If O is the (0 0 0) reciprocal lattice point, and B is a general point (h k l), then the distance OB is 1/dhkl = Shkl. The reciprocal vector between the points, S, increases in magnitude with increasing 2θ. Hence

\(\sin \theta = \frac{{\frac{1}{{{d\_{hkl}}}}}}{{\frac{2}{\lambda }}}\)

i.e. λ = 2 dhkl sin θ

The Ewald circle represents in reciprocal space all the possible points where planes (reflections) could satisfy the Bragg equation. In three dimensions an Ewald sphere represents all the possibilities. For simplicity, it is often drawn as a circle in two dimensions.

The animation below shows how the Ewald circle is used in diffraction to determine the angles at which the Bragg equation is satisfied. Now that you have seen the construction of the Ewald sphere, try finding some points yourself in this simulation:

In order to 'measure' a diffraction spot one has to position the detector (film or electronic) on the reciprocal lattice point.
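A minimal numerical sketch of the cross-product formulae above (crystallographers' convention, no 2π factor). The example cell is a hypothetical one chosen for illustration.

```python
# Reciprocal lattice vectors from a* = (b x c)/(a . b x c), etc., checked on
# an arbitrary example cell via the defining property a_i* . a_j = delta_ij.
import numpy as np

def reciprocal(a, b, c):
    V = np.dot(a, np.cross(b, c))        # unit cell volume
    return np.cross(b, c) / V, np.cross(c, a) / V, np.cross(a, b) / V

# Example real-space cell (nm), chosen arbitrarily for illustration:
a = np.array([0.5, 0.0, 0.0])
b = np.array([0.1, 0.4, 0.0])
c = np.array([0.0, 0.0, 0.3])

astar, bstar, cstar = reciprocal(a, b, c)
print(np.round([astar @ a, astar @ b, astar @ c], 6))   # -> [1. 0. 0.]
print(np.round([bstar @ a, bstar @ b, bstar @ c], 6))   # -> [0. 1. 0.]
print(np.round([cstar @ a, cstar @ b, cstar @ c], 6))   # -> [0. 0. 1.]
```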
Applications of reciprocal space

### Ewald sphere and reciprocal space for single crystal and oriented samples

Reciprocal space and the Ewald sphere have important implications for x-ray diffraction. Experiments are set up in real space. Some topics can be considered in either real or reciprocal space, whilst others are simpler, or even only really work, in reciprocal space.

Ways to measure more reflections from a single crystal have been described in real space in the 'X-ray diffraction' TLP, but are simpler to see in reciprocal space with the Ewald sphere construction. To observe more reflections one can:

1. Rotate the reciprocal lattice (i.e. the sample) relative to the incoming x-ray beam.
2. Use a spread of wavelengths, i.e. 'white' radiation, to give the Ewald sphere a substantial thickness instead of a thin surface. This is the basis of the Laue method. An interesting application is on-line assessment of orientation in single crystals such as turbine blades.
3. Spread a reciprocal spot into a ring, as in the powder method.

### Reciprocal Space Maps

This application, for measuring the lattice parameters of a film as compared with those of the underlying substrate, is shown in the following animation. Here it is much simpler to interpret the data in reciprocal space.

### Systematic absences explained by the reciprocal lattice

For non-primitive lattices, systematic absences can occur in the reciprocal lattice and in the diffraction patterns. This is due to the construction of the lattices. Shown below is an example of how a larger unit cell is used instead of the primitive one. Because the magnitude of each reciprocal lattice vector is the reciprocal of the corresponding plane spacing, the shortest plane spacing along a lattice vector direction gives the longest repeat along the corresponding reciprocal lattice vector direction. When the new reciprocal lattice is labelled with respect to the new reciprocal lattice vectors, the dashed spots are "absent". These systematic absences are used to determine the lattice type, e.g. primitive, body- or face-centred. (A numerical sketch of how such absences arise is given below.)
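A minimal sketch of why the 'absent' spots vanish, using the structure factor of a conventional FCC cell. This is a standard textbook result, sketched numerically rather than taken from this TLP.

```python
# Structure factor of the conventional FCC cell,
# F(hkl) = sum_j exp(2*pi*i*(h,k,l).r_j) over its 4 lattice points: only
# reflections with h, k, l all odd or all even survive.
import numpy as np
from itertools import product

basis = np.array([[0, 0, 0], [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5]])

for hkl in product(range(3), repeat=3):
    F = np.exp(2j * np.pi * basis @ np.array(hkl)).sum()
    if abs(F) > 1e-9:
        print(hkl, "present")
# e.g. (111) and (200) appear; mixed-index reflections such as (100) or
# (110) are systematically absent - the signature of face centring.
```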
### Indexing

Assigning indices to the diffraction spots and working out the unit cell is done in reciprocal space. This is described in a separate TLP.

### Brillouin Zones

Brillouin zones, an important tool in solid state physics, are also constructed in reciprocal space.

Summary =

Following completion of this TLP you should understand what a reciprocal lattice is, and how it is constructed. You should understand that the Ewald sphere is a geometric tool to find reflections and angles that satisfy the Bragg equation. Thus it is used to set up diffraction experiments and interpret the data. The package has also shown how reciprocal vectors are important in understanding periodic structures and diffraction, which have a reciprocal nature.

Questions =

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. What does a point in reciprocal space correspond to in real space?

| | | |
| - | - | - |
| | a | A lattice point |
| | b | A plane |
| | c | A lattice vector |
| | d | A unit cell |

### Open-ended questions

*The following questions are not provided with answers, but intended to provide food for thought and points for further discussion with other students and teachers.*

2. Use this Ewald sphere to find the angle at which a (2 0 0) plane will satisfy the Bragg condition for this lattice and wavelength. (Assume that it is a primitive lattice.)

### Deeper questions

*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

3. What is the reciprocal lattice of a reciprocal lattice?

4. What is the reciprocal Bravais lattice of an FCC lattice?

5. What are the reciprocal lattice vectors of a lattice with ![](images/q5a.gif), ![](images/q5b.gif) and ![](images/q5c.gif)?

6. On the (105) maps below, what are the in- and out-of-plane reciprocal space coordinates for the films? Try to convert them into real space cell parameters. (As shown previously, c is out-of-plane and both films have hexagonal unit cells.)

| | |
| - | - |
| AlxGa(1-x)N | InxGa(1-x)N |

Going further =

### Books

Reciprocal space is described in most crystallography books. For example:

* C. Hammond, *The Basics of Crystallography and Diffraction*, 2nd edition, OUP, 2001
* B. D. Cullity and S. R. Stock, *Elements of X-ray Diffraction*, 3rd edition, Prentice Hall, 2001
* Eds F H Chung and D K Smith, *Industrial Applications of X-ray Diffraction*, 2000 – has a chapter on the history of the Rolls Royce on-line Laue system

### Websites

* A site using reciprocal space to relate the diffraction pattern to real space.
* Brillouin zones, an important tool in solid state physics, are also constructed in reciprocal space.
Aims

The aims of this package are as follows:

* To gain an understanding of why recycling is used and useful in today's economy.
* To explore the huge diversity of ways in which materials science contributes to the process of recycling, relating key concepts to applications within this area.
* To engage and promote an interest in the progress and the potential for expansion in this vital area.

Before you start

* Ideally you should be familiar with the scientific concepts of:
  + Phase diagrams
  + Pourbaix diagrams
  + Ellingham diagrams
  + Electrochemistry
  + Materials thermodynamics and kinetics

  (each of these is covered in its own TLP)
* It is possible to learn from this package without this knowledge, although it is primarily aimed at linking fundamental materials science concepts with the recycling process.

Introduction

Metals have been used for thousands of years. Until the Industrial Revolution most metal products were recycled, because they were scarce. During the Industrial Revolution recycling was not always a high priority in the shadow of development, as there was a seemingly unending supply of ore and fuel for processing.

![metal rings](figures/rings_sml.jpg)

In today's world, the emphasis is shifting away from energy intensive development. This is not because scarcity is once again an issue and ores are running out, but because the energy requirements to extract and process ores into the refined state needed for the high tech industry are ever increasing. There is a drive to reduce emissions from burning hydrocarbons for energy, together with a decreasing oil supply. This means that recycling is again an economically and environmentally feasible option, since in most cases the energy required for recycling of metals is much less than the energy required to refine them from ores. Recycling improves the sustainability of metal product systems, by separating resource consumption from economic growth.

It is important to note when looking at recycling statistics that the definition of "% recycled" may differ. Both

(amount recycled) / (amount available for recycling)

and

(amount of recycled metal) / (amount of metal produced)

could be labelled as "% recycled", although they may have very different values. Ambiguous statistics like this illustrate how background knowledge of the science involved can be useful when assessing the subject of recycling.

What metals can be recycled?

In short, almost all metals can. For example in the U.S., of the 132 million tonnes of metal 'apparent supply', recycling contributed 67 million tonnes. That's equivalent to about 50.8%.

In the UK, iron and steel make up the majority of the recycled metal in use. It is supplied mainly from industry and increasingly from municipal and household waste. Common examples include aluminium and tin/steel cans, and cars.

![UK metals recycling statistics](figures/recycling_stats_sml.png)

UK metals recycling statistics.

However, the process of metal recycling is not as simple as 'melt it': materials science knowledge is needed. There are two viewpoints to the recycling process: the recycling of individual metals, and the recycling of whole products.

#### Is recycling economically feasible?

Recycling is a great idea, in theory. The sad fact is that unless there are clear economic gains from recycling metal, large-scale initiatives are unlikely to become popular.
To be economically viable, the energy needed to recycle a metal must be significantly less than the energy needed to produce it from ores. There are statistics quoted for the amount of energy saved by recycling, for example these from the British Metals Recycling Association:

| | |
| - | - |
| Metal | Energy Saving (%) |
| Steel | 62 - 74 |
| Copper | 87 |
| Zinc | 63 |
| Lead | 60 |

But where do these numbers come from? It is the job of the materials scientist to come up with values like the ones above, requiring calculation as precisely as possible using fundamental background knowledge.

Processing before recycling =

Metals are used in a wide variety of applications. They will therefore be in a wide variety of states when they are sent for recycling. Sorting and processing of metal scrap is essential, because when melted, mixtures of metals may become alloys. Without careful separation the quality of the final product will be reduced. This issue is explored and explained later in this TLP.

In this section, four examples of sorting and processing are investigated:

1. the eddy current separation method (electromagnetic induction)
2. tin can processing (electrochemistry)
3. the theoretical separation of copper from motors (ductile-brittle transition)
4. the removal of contaminants from the melt (Ellingham diagram)

Phase diagrams are also used to illustrate the problems that can occur when not all contaminants are removed.

Physical sorting – The Eddy current separation method =

The most obvious example of sorting is that of using magnets to attract scrap. Magnetism occurs in iron due to unpaired electrons in the *d*-orbitals giving each iron atom a magnetic moment. All these moments are aligned due to the interaction of the *d*-orbitals, giving an overall magnetic orientation. Magnetic materials can therefore be separated easily.

A large number of materials are not magnetic - aluminium, for example. They still need to be separated before recycling. The eddy current separation method usually sorts this scrap. Eddy current separation uses the principles of electromagnetic induction in conducting materials to separate non-ferrous metals by their differing electrical conductivities. The main principle is that *an electric current is induced in a conductor by changes in the magnetic flux cutting through it*. Moving permanent magnets passing a conductor generate the change in magnetic flux. Electromagnetic induction and eddy current generation will not be explored further here (although there are links in the Going further section if you wish to find out more about this subject).

Faraday's law of electromagnetic induction describes the generation of swirling "eddy" currents in conductors, such as the non-ferrous metals in this example. In accordance with Lenz's law, these currents create a magnetic field that acts to oppose the change in the magnetic field being applied.

The basic set up is to have the non-ferrous scrap on a conveyor belt. The conveyor passes a rotating drum, inside of which is a much faster rotating magnet block (up to 4000 rpm). The magnet block causes the changing magnetic flux. Try the interactive demonstration below!

When the conducting particles move through this changing flux on the conveyor, a spiralling current and resulting magnetic field are induced. This magnetic field of the metal particles interacts with the magnetic field of the rotating drum. The interaction gives the particles kinetic energy. The scrap particles are thrown off the end of the conveyor with varying energies, causing different trajectories depending on the conductivity of the particle. The size of the particles and the direction of rotation of the drum can be changed to vary the degree of separation. Small particles (10–50 mm) can be separated in this way. The most conductive materials interact the most with the magnetic field and have the longest trajectories. Aluminium has a higher conductivity for a given weight at ambient temperature than any other element.
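A minimal sketch ranking metals by conductivity per unit mass, the figure of merit behind that last claim. The conductivities and densities are approximate room-temperature handbook values, quoted for illustration only.

```python
# Ranking metals by conductivity per unit mass (sigma/rho), the property that
# governs how strongly eddy currents throw light scrap particles.
# Approximate room-temperature handbook values, rounded.

metals = {                      # (conductivity S/m, density kg/m^3)
    "aluminium": (3.5e7, 2700),
    "copper":    (6.0e7, 8960),
    "zinc":      (1.7e7, 7140),
    "iron":      (1.0e7, 7870),
}

for name, (sigma, rho) in sorted(metals.items(),
                                 key=lambda kv: -kv[1][0] / kv[1][1]):
    print(f"{name:10s} sigma/rho = {sigma/rho:.0f} S m^2 kg^-1")
# Aluminium comes out far ahead, which is why it separates so effectively.
```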
Non-metallic materials such as plastic labels and paint do not interact with the magnetic field at all. They simply fall off the end of the conveyor belt with no change in energy.

The eddy current separator is another excellent example of how knowledge of materials properties (such as electrical conductivity and density) has improved recycling technology. However, further processing is still needed to remove coatings and some alloys before re-melting – for example the tin coating on steel cans, considered next.

Tin can processing =

#### Leaching and electrolysis of tin from steel cans

How do you remove the tin-plating from tin cans without dissolving the steel underneath? Solving this problem requires knowledge of the way in which metals corrode – an important concept in materials science. The process is a little different today than explained here, because the tin coating has been thinned (those who remember may have noticed the decreasing weight of 'tin cans' over the years).

The original method to remove tin from cans was stripping by electrolysis. Knowledge of electrochemistry is critical in the designing of this process. The tin plating must be removed without dissolving the steel underneath.

![](figures/tin_can_sml.png)

#### The reactions used

Pourbaix diagrams plot potential vs. pH. They describe the thermodynamic stability of metals and their oxidised components, as a function of the pH of the aqueous solution. Using these diagrams the most commonly used method for detinning steel has been designed.

![](figures/sn_fe_pourbaix_sml.png)

The Pourbaix diagrams for iron (black) and tin (blue) superimposed.

At a high pH (>12) – the purple region in the diagram – Sn can be stabilised in alkali solution as HSnO2⁻, and as SnO3²⁻ in the presence of an oxidant. In this region of the diagram, iron is passivated and does not corrode. The oxidant helps this passivation to occur. In this way the tin can be removed from the surface of the can, while the steel is not harmed.

The electrochemical oxidation of the tin can be expressed as:

Sn(s) + 2H2O(l) = HSnO2⁻(aq) + 3H⁺ + 3e⁻ (reaction 1)

Using the Nernst equation:

\[{E\_{{\rm{rev}}}} = {E^0} + \frac{{RT}}{{nF}}\ln \alpha \]

(with n the number of electrons transferred and α the activity quotient), we can write the electrode potential of this reaction as a function of pH and tin ion activity. For a pH of 12 and [HSnO2⁻] of 10⁻² mol, **E = –0.79 V**. By convention this is expressed as the reduction potential, i.e. for the reverse of reaction 1. For the tin being dissolved, the potential is reversed in sign, i.e. E = +0.79 V.

Under oxidising conditions, HSnO2⁻ can be oxidised further to SnO3²⁻:

HSnO2⁻ + H2O = SnO3²⁻(aq) + 3H⁺ + 2e⁻ (reaction 2)

The Nernst potential for this is calculated as:

\[E({\rm{V}}) = 0.374 - 0.0886\,{\rm{pH}} + 0.0295\log \left( {\frac{{[{\rm{SnO}}\_3^{2 - }]}}{{[{\rm{HSnO}}\_2^ - ]}}} \right)\]

For pH = 12 and [SnO3²⁻] = [HSnO2⁻] = 10⁻² mol, we find that E = –0.69 V. This too is a reduction potential, i.e. for the reverse of reaction 2. The potential for turning HSnO2⁻ into SnO3²⁻ is +0.69 V.
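A one-line check of the figure just quoted, using the Nernst expression for reaction 2 exactly as given above:

```python
# Checking the quoted Nernst potential for reaction 2,
# E(V) = 0.374 - 0.0886*pH + 0.0295*log10([SnO3^2-]/[HSnO2^-]),
# at the stated detinning conditions.
import math

def E_reaction2(pH, ratio):
    return 0.374 - 0.0886 * pH + 0.0295 * math.log10(ratio)

# pH 12 with equal concentrations of the two tin species (ratio = 1):
print(f"E = {E_reaction2(12, 1.0):+.2f} V")   # -> -0.69 V, as quoted
```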
![](figures/detin_cell_sml.png)

#### Setting up the process

An electrochemical detinning cell is arranged with the tin plated can as the anode. If the cell is arranged so that the reactions go backwards at a cathode plate, pure tin is deposited on the cathode. Thus the overall cell reaction is Sn (anode) → Sn (cathode). The whole cell electrode potential is zero. The applied potential is not zero, because we need to put some energy in to overcome these energy barriers:

* Ohmic losses – the electrolyte, the connecting wires and the electrodes have electrical resistance.
* Polarisation losses – the surfaces of the electrodes become charged and we need to supply the ions with enough energy to escape the charged layer.

As for the aluminium cell, the whole cell electrode potential can be written as:

Ecell = Erev + ηA + ηC + (*I · R*)

where *I* is the current, *R* is the resistance of the components and ηA is the potential across the charged layer at the anode (and similarly ηC for the cathode).

#### The effect on the iron

At pH 12 and under oxidising conditions, the iron is passivated. Fe3O4 and/or Fe2O3 form an adherent, non-porous layer on the surface (a passivation layer). This slows down the rate of movement of Fe ions into solution and protects the iron from being dissolved. The presence of oxidising agents such as sodium nitrite makes the passivating layer form faster and more completely.

The passivation reaction producing Fe3O4 can be written as:

3Fe + 4H2O → Fe3O4 + 8H⁺ + 8e⁻ with a potential E = -0.085 - 0.0591 pH

Under oxidising conditions, the passivating layer is Fe2O3:

2Fe3O4 + H2O → 3Fe2O3 + 2H⁺ + 2e⁻ with potential E = -0.221 - 0.0591 pH

At very high pH (>13), solutions can be corrosive to Fe, especially if they are free of oxidising agents:

Fe + 2H2O → HFeO2⁻ + 3H⁺ + 2e⁻ with E = -0.2493 - 0.0886 pH + 0.0295 log [HFeO2⁻]

HFeO2⁻ dissolves in the electrolyte and the steel can corrodes away.

The detinning process can be recreated in miniature in the laboratory:

Copper in motors =

Automobiles today contain many motorised components: windows, seats, CD drives… you name it, it's motorised. These motors contain lots of valuable copper. To take each motor apart and extract the components is not economically viable, taking into account the energy already expended in dismantling the automobile itself. However, materials science could provide an answer.

At low temperatures, materials become more brittle. This is because the movement of the dislocations that enable ductility is reduced at cold temperatures. There is a difference in the crystal structures of copper and the other components of the motor (steel and polymers). Copper has a face-centred-cubic (fcc) structure (also known as cubic close packed), and steel has a body-centred-cubic (bcc) structure. Dislocation glide requires a much lower stress in the fcc structure, and remains easy down to low temperatures, whereas bcc metals show a ductile-brittle transition. Copper can therefore still show ductile behaviour at temperatures as low as -150°C – as shown on the graph by the high impact energy absorbed relative to other materials.

![](figures/brittle_ductile_transition_sml.png)

Impact energy as a function of temperature for several materials (steel, nylon and copper being motor components), showing how copper is still ductile at –150°C, but steel and nylon are not.
Copper in motors =

Automobiles today contain many motorised components: windows, seats, CD drives… you name it, it's motorised. These motors contain a lot of valuable copper. Taking each motor apart to extract the components is not economically viable, taking into account the energy already expended in dismantling the automobile itself. However, materials science can provide an answer.

At low temperatures, materials become more brittle. This is because the movement of the dislocations that enable plastic deformation is reduced at cold temperatures. There is also a difference between the crystal structures of copper and the other components of the motor (steel and polymers). Copper has a face-centred cubic (fcc) structure (also known as cubic close packed), and steel has a body-centred cubic (bcc) structure. The fcc crystal structure has more slip systems on which dislocations can move than the bcc crystal structure. Copper can therefore still show ductile behaviour at temperatures as low as –150 °C – as shown on the graph by the high impact energy absorbed relative to the other materials.

![](figures/brittle_ductile_transition_sml.png)

Impact energy as a function of temperature for several materials (steel, nylon and copper being motor components), showing how copper is still ductile at –150 °C, but steel and nylon are not.

Separation of the copper from the rest of the components of the motor is thus possible by cooling the motor to –150 °C and then crushing it. Screening will remove the much finer steel and plastic dusts, which form as a result of these materials being much more brittle at this temperature than the copper. It should be noted that the costs of cooling components to this temperature and then separating them do not appear to be economically feasible; in addition, the steel and copper may amalgamate together, so that sophisticated (and therefore expensive) screening techniques would have to be employed to separate them. However, in the recycling of Japanese home appliances this technique is sometimes used – together with exploiting the various ductile-brittle transition temperatures of the non-metallic components to further aid their pre-recycling separation. This can be illustrated with a demonstration.

Recycling processes and issues =

After it has been sorted, metal is melted in a furnace that can be of two types: the standard Basic Oxygen Furnace (BOF) and the Electric Arc Furnace (EAF). The latter is the most widely used for recycling. The image below shows the electrodes and roof of a 10 tonne electric arc furnace.

![Electric arc furnace](figures/eaf.jpg)

An electric arc furnace (EAF). Image contributed by Haiko Hebig.

Steel is the most recycled metal, with 400 million tonnes per year being recycled. Most EAF-based plants, called mini-mills, refine 50–250 kilotonnes of scrap per year; some new EAF plants have the capacity to produce up to 1 million tonnes per year. The electric arc furnace method is explained in detail elsewhere. When scrap is recycled, it will contain impurities that have to be removed by blowing oxygen over the molten scrap. Refining is therefore an important step in EAF steelmaking: depending upon the specification of the steel being made, it is important to remove both impurities and alloying elements. The EAF is a versatile process and can readily be operated under oxidising or reducing conditions, unlike the BOF, which is always operated under highly oxidising conditions.

Contaminants in aluminium alloys =

Aluminium is the most widely used aerospace metal. As calculated earlier in this TLP, it is highly energy-intensive to produce, and recycling it is both economically and environmentally beneficial. It is usually separated from scrap by the eddy current separation method described above. However, very small amounts of ferrous metals and other contaminants will inevitably remain in the scrap when it finally reaches the melting stage. This may not seem to be much of an issue – aluminium is never used in the pure form. It is always alloyed with other metals and elements to give the exact mechanical and chemical properties required for the desired application, and iron is almost always present at <1% in these alloys. Why would having a few percent of 'unplanned' iron mixed into the aluminium melt be of consequence? Look at the phase diagram below:

![](figures/AlFe_phase_diagram_sml.png)

Al–Fe phase diagram.

If iron were a contaminant in an aluminium melt, it would be present at a small percentage; the right-hand side of the diagram shows the behaviour of the system at this composition. When molten, iron is completely soluble in aluminium. What do you think will happen, in terms of the microstructure, when the ingots are cooled? Although the microstructure of the ingots themselves is not of particular concern, the resulting behaviour of the aluminium in its application is.
The presence of intermetallic compounds (such as Al13Fe4) may reduce the ductility and machinability of the alloy, since the precipitates will interfere with dislocation motion and reduce ductility. Aluminium alloys are also heat treated to optimise their properties. Obviously, the behaviour of an aluminium alloy when heat treated, for example, cannot be predicted if its composition is not precisely known. Even though recycling aluminium is highly attractive from an economic point of view, for some high-tech applications the aluminium used has to come from primary production. If a larger amount of the aluminium used in aerospace applications is to come from recycled sources, methods are needed to further refine the recycled aluminium at low energy cost (to keep recycling the lower-energy process compared with primary production).

The Ellingham diagram in removal of contaminants =

The recycling of metals is a metallurgical process, and as such can be described by the rules of thermodynamics. Earlier in this TLP this was illustrated by the energy considerations of aluminium production. At the most basic level, if the free energy of the reactants in a chemical reaction is different from that of the products, a reaction will occur, and the reaction will stop when the free energies of the products and the reactants are equal. Contaminants in molten recycled-metal 'solutions' are removed using the principles of oxidation and reduction, which can be described graphically on the Ellingham diagram. The Ellingham diagram shows the changes in standard free energy that occur in various reactions; in this TLP the focus is on oxidation reactions. The free energy change for an oxidation reaction can be given by:

\[\Delta G = \Delta G^0 + RT\ln K\]

where K is the activity quotient for the reaction (equal to the equilibrium constant when the reaction reaches equilibrium), calculated from:

\[K = \frac{a({\rm metal\ oxide})}{a({\rm metal})\,(p_{{\rm O}_2})^x}\]

where x is the number of moles of O2 in the reaction (i.e. if the reaction is Metal + ½O2 → MetalO, then x = ½), and the activities of the pure condensed phases may be taken as unity. At equilibrium, ΔG = 0, so:

\[\Delta G^0 = -RT\ln K\]

Plotting ΔG0 against T gives the Ellingham diagram. The oxidation Ellingham diagram is used to find the partial pressure of oxygen needed to oxidise an element at a given temperature, or to reduce its oxide. The vertical difference between the ΔG0 values of two lines at a specific temperature gives the ΔG value for the corresponding redox reaction, as used for the energy for primary aluminium production explored further up in this TLP.

In the EAF, under oxidising conditions (bubbling oxygen gas through the molten metal), elements such as aluminium, silicon, manganese and chromium can be oxidised into the slag. The Ellingham diagram is used to determine the oxygen partial pressure to be bubbled through the molten metal:

![](figures/Ellingham_sml.png)

1. Identify a point corresponding to a selected temperature on the line for 2Fe + O2 → 2FeO, above **M**.
2. Using this point, and the point O in the top left corner, draw a line across the diagram.
3. Read the partial pressure of O2 from the right-hand axis.

At any oxygen pressure higher than ~10^–8.5 atm, the iron will be oxidised at a temperature of 1600 °C. It can also be seen from the Ellingham diagram that the equilibrium lines for aluminium, silicon, manganese and chromium lie at lower free energies than the 2Fe + O2 → 2FeO line, and so at any partial pressure of oxygen sufficient to oxidise iron, all of these elements will be oxidised into the slag.
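To make step 3 concrete: for an oxidation written per mole of O2 with pure condensed phases, ΔG0 = –RT ln K reduces to ΔG0 = RT ln pO2 at equilibrium. The Python sketch below is an editorial illustration; the value of about –300 kJ per mole of O2 for the 2Fe + O2 → 2FeO line at 1600 °C is an assumed reading from the diagram, not a figure quoted in the TLP, but it reproduces the ~10^–8.5 atm threshold mentioned above.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def equilibrium_pO2(dG0, T):
    """Equilibrium oxygen partial pressure (atm) for a metal/oxide pair.
    Per mole of O2, with unit activities for the condensed phases:
    dG0 = -RT ln K = RT ln pO2, so pO2 = exp(dG0 / RT)."""
    return np.exp(dG0 / (R * T))

# Assumed reading from the Ellingham diagram for 2Fe + O2 -> 2FeO at 1873 K
dG0_FeO = -300e3  # J per mole of O2 (approximate, illustrative)
pO2 = equilibrium_pO2(dG0_FeO, 1873.0)
print(f"pO2(eq) ~ 10^{np.log10(pO2):.1f} atm")  # ~10^-8.4, close to the quoted 10^-8.5
```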
However, sulphur and phosphorus cannot be removed from the steel melt by simple oxidation; other methods have to be employed. Phosphorus oxidation is possible under very basic conditions: reducing the activity of the oxidised P2O5 in the slag alters the value of ΔG for the oxidation reaction very significantly. Sulphur can also be removed in a basic slag, but under reducing conditions, by the addition of lime (CaO):

FeS + CaO → CaS + FeO

For making alloy steels, it is also possible to preserve alloying elements such as Ni or Mo by using a reducing slag in the EAF. Copper and tin are examples of elements that cannot be refined out of steel in the EAF (or the BOF), illustrating the importance of the de-tinning process and the removal of copper from motors explored earlier in the TLP. These elements render steel very difficult to process, owing to hot shortness during hot rolling, so it is important to control them at very low levels in steel. As they cannot be refined out in the BOF or EAF, they will accumulate in steel and increase in concentration in successive generations. If we do not find a suitable large-scale method for removing these elements, the success of steel recycling in the future will be seriously limited. This issue is explored quantitatively in the Deeper Questions section of the TLP.

Automobile recycling =

A good way to consolidate the information you have already learned in this TLP is to look at how recycling processes are used in a real application, for example the recycling of automobiles. Cars are usually designed with a specific lifespan – around 10 years – and this lifespan is steadily decreasing over time because of the increasing speed of development in automobile manufacture. The recycling of automobiles is a partial success story, since on average today 75% of a car by weight is recycled. Still, the amount recycled will have to increase to meet government targets: in the UK, a target of recycling 95% of a car by weight was set from 2015, of which only 10% may be fuel.

However, 75% by weight does not mean 75% by volume. A large amount of the volume of a car is in the form of plastics and other non-metallic materials such as glass, while the recycled fraction is almost entirely ferrous metal – virtually none of the non-metallic materials in cars are recycled. This is an issue that will have to be addressed if the new targets are to be met.

![](figures/auto_pie_chart_sml.png)

Source: ACORD annual report, 2001.

There are four stages in the recycling of an average car:

1. All **fluids are drained** from the car, including antifreeze coolant, oil, brake fluid, transmission fluid and washer fluid. In theory, distillation could be used to separate out these liquids – especially the oil and grease, which can be used again as fuel or lubricant.
2. Easily removable parts of the vehicle are taken out and **sorted** according to recyclability: whether they can be re-sold (such as the bumpers), or whether they are to be landfilled. The glass windows can be recycled, along with the tyres and a few of the polymeric components.
3. **Crushing** of the remainder of the car.
4. **Shredding** into small particles. These particles can then be sorted by the eddy current separation method explained above, to recover the ferrous and non-ferrous metals from the residual 'shredder fluff'. Shredder fluff is currently landfilled, because of the economic costs of separating it out once it has been shredded into fist-sized pieces.
Although landfilling consumes very small amounts of energy, in the UK there are new restrictions on the amount of material that may be landfilled, for reasons such as chemicals leaching into the water supply. Methods are continually being developed to process this residual material so that the new ELV directive (95% of each car recycled from 2015) can be met, involving large amounts of research and development in the recycling of polymeric materials – again illustrating the important role materials science plays in improving the sustainability of our planet. Materials developed from biological sources, which are sustainable and can be recycled, are coming into use, for example in Ford's Model U car.

![](figures/ford_u.jpg)

Ford's Model U car (image provided by John Nens, Ford Motor Company). The Model U is helping encourage the development of materials that are safe to produce, use and recycle over and over again in a cradle-to-cradle cycle. These materials never become waste; instead they are nutrients that feed either healthy soil or manufacturing processes, without moving down the value chain.

Summary =

Recycling is currently a 'hot topic' in the political sense, and a challenging one scientifically. As the Earth's resources become more energy-intensive to extract, recycling will move towards the forefront of scientific research and development. It would seem common sense that recycling is the way forward for development. Unfortunately, until the economics of recycling are more favourable than the economics of primary production, it will not become the primary source of metal. Increased research into methods for recycling that minimise energy expenditure – and therefore increase the economic appeal of recycling – requires the expertise of materials scientists and metallurgists.

In this TLP you have:

* Learned how statistics quoted about recycling need to be critically analysed before drawing any conclusions.
* Used thermodynamic and electrochemical principles from materials science to calculate the energy saved by recycling aluminium.
* Explored, from a scientific viewpoint, the varied methods by which metals are sorted prior to recycling, recognising the key role materials science plays in developing recycling technology.
* Looked at the reasons for separating metals before re-melting, using materials science concepts to explain why this is necessary.
* Learned a little about the recycling process of automobiles – acknowledging that the recycling of metals in this scenario has been a success.

It was the aim of this TLP to provide a taster of the huge range of different branches of materials science that contribute to metals recycling technology.

Questions =

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. Ferrous scrap metal is easily separated from non-ferrous using an electromagnet. However, non-ferrous metals still need to be separated from one another before recycling. What materials properties allow the separation of non-ferrous metals in the eddy current separator?

    a. Mass and density
    b. Conductivity and mass
    c. Conductivity and density
    d. Conductivity

2. In the eddy current separator, a rotating magnet block causes a changing magnetic flux, and swirling currents are formed in the metal moving through the changing flux as a result. What effect do these swirling currents have?
3. Tin is separated from steel cans using principles involving corrosion. What method is used to decide the conditions of corrosion such that the steel does not dissolve into solution as well?

4. How is the tin removed from the solution?

    a. Precipitation and filtering
    b. Electrolysis
    c. The tin is not removed from the solution
    d. Fractional distillation

5. What property of copper could be exploited in order to separate it from steel and plastics before recycling?

    a. Copper is more ductile than the other materials named
    b. Copper has a lower melting temperature than the other materials named
    c. Copper has a lower ductile-brittle transition than the other materials named
    d. Copper is shinier than the other materials named

### Deeper questions

*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

6. If the calorific value of coal is 30.9 MJ kg^–1 and it is combusted at 40% efficiency in a power station, how many kilograms of coal would be needed to produce 1 kg of primary-production aluminium and 1 kg of recycled aluminium respectively? Hence, or otherwise, find a percentage value for the energy saved by recycling aluminium.

7. Today, an increasing amount of metal is being galvanised. In automotive and home-appliance applications, the advantage of galvanised steel is that even if the Zn layer is scratched, the steel will not rust. This comes from the fact that Zn is much more reactive with O2 than the steel (Fe), and so reacts with the oxygen preferentially. However, in 5 or 6 years, when these galvanised items reach the end of their lives, a huge amount of zinc-plated scrap will need to be recycled. Zn has a boiling point of 907 °C. This is a lower temperature than the melting point of steel in the EAF, so if Zn is present in the melt, it boils off as a vapour and is separated. Carbon dioxide gas is also present in these exhaust fumes, and since Zn is highly reactive with oxygen, it displaces the carbon to form ZnO particles. These take the form of an extremely fine dust that is a problem for workers at EAF plants. The dust can be collected, although it has to be further refined to become useful Zn (expending more energy). Since Zn is most commonly electroplated onto steel, it could be removed in the same way as tin is removed from steel, and this process is currently being researched. Find a pH and an *E* value that would allow Zn to be corroded and Fe to be passivated.

![](figures/iron_zinc_pourbaix_sml.png)

8. An exemplar energy calculation for the electrolytic production of aluminium is shown in the TLP. Magnesium is another metal that has to be extracted via electrolysis rather than direct reduction with carbon, since it is more 'reactive' than carbon:

MgCl2(l) → Mg(l) + Cl2(g)

![](figures/mg_electrolysis_sml.png)

By using the same method as that used in the TLP for aluminium, calculate the energy required to produce 1 kg of magnesium.

9. In the electric arc furnace, conditions are created such that impurities resulting from poorly sorted steel scrap are reduced or oxidised into the slag. It was explained in the TLP how the Ellingham diagram can be used to predict which elements will come out of solution with the steel and which ones will not.
Phosphorus (P), copper and tin cannot come out of solution this way, and other methods have to be employed (as described) to remove them from solution. It was explained that P is removed in the EAF by the addition of CaO to produce a very basic slag. **Why does this occur?**

Phosphorus can oxidise to P2O5 as shown:

4P + 5O2 → 2P2O5

The driving force for P oxidation is lower than that for the oxidation of Fe. Thermodynamically, as depicted in the Ellingham diagram, oxidation of P out of an Fe melt is not favourable. As the concentration of P in the melt is low, the activity of P in the Fe melt is of the order of magnitude of 10^–3, further decreasing the driving force for oxidising, and thus removing, P from the melt.

It will be possible to remove P from the Fe melt by an oxidation reaction provided the driving force is greatly increased, by achieving a very low activity of P2O5 in the slag in equilibrium with Fe, such that P2O5 is transferred from the melt to the slag. For this to happen, the ΔG value for the oxidation of Fe to FeO needs to equal the ΔG value for the oxidation of P to P2O5.

If we assume that Fe is pure (which is not unreasonable) and the slag is saturated with FeO, the ΔG of the reaction

2Fe + O2 → 2FeO

becomes equal to ΔG°, because the activity term RT ln K is then zero. (Note that in practice an oxidising slag actually contains about 20 mol% FeO, at which the activity of FeO is ~0.2.)

For P to be oxidised into the slag spontaneously, the ΔG of the P oxidation reaction needs to be at least equal to the value of ΔG° from the iron reaction. Since the value of ΔG° for the oxidation of P is known – it can be found from the Ellingham diagram at the temperature within the EAF – one can calculate the required activity of P2O5 that allows it to move into the slag in the EAF. **Find this value.**

Going further =

### References

[1] USGS Minerals Yearbook, 2003.
[2] British Metals Recycling Association.
[3] Part III Materials Science lecture notes © Dr R.V. Kumar, University of Cambridge.
[4] WasteOnline website (no longer online).
[5] Part IA Materials Science lecture notes © Dr J.L. Driscoll, University of Cambridge.
[6] Personal website of Haiko Hebig.
[7] M. Pourbaix, Atlas of Electrochemical Equilibria in Aqueous Solutions.

### Websites

* An excellent website on aluminium can recycling, with information on starting recycling initiatives in your area
* More information on electromagnetic induction
* More information on eddy current generation
Aims =

On completion of this TLP you should:

* Be familiar with the concept of energy bands;
* Understand the two charge carriers, electrons and holes, in semiconductors;
* Understand why doping of semiconductors is useful;
* Be familiar with basic electronic devices made using semiconductors and how these devices operate.

Before you start =

It is advisable before attempting this TLP to ensure familiarity with the simple theory of covalent chemical bonding.

Introduction =

Semiconductors are amongst the most technologically important materials in existence today. With the exception of extremely simple objects such as filament light bulbs, all of the electronic devices that we use involve some semiconductor-based devices. In order to appreciate how semiconductors can be used to create devices, it is important to have an understanding of the basic electronic properties of semiconductors. The first section of this TLP will concentrate on describing what it is that makes a material a semiconductor, and how semiconductors respond to an applied electric field. The second half of the TLP gives some specific examples of semiconductor devices and where these devices are used.

Introduction to Energy Bands =

When two valence electron atomic orbitals in a simple molecule such as hydrogen combine to form a chemical bond, two possible molecular orbitals result. One molecular orbital is lowered in energy relative to the sum of the energies of the individual electron orbitals, and is referred to as the **'bonding'** orbital. The other molecular orbital is raised in energy relative to the sum of the energies of the individual electron orbitals and is termed the **'anti-bonding'** orbital.

In a solid, the same principles apply. If *N* valence electron atomic orbitals, all of the same energy, are taken and combined to form bonds, *N* possible energy levels will result. Of these, *N*/2 will be lowered in energy and *N*/2 will be raised in energy with respect to the sum of the energies of the *N* valence electron atomic orbitals. However, instead of forming *N*/2 bonding levels all of the exact same energy, the allowed energy levels will be smeared out into **energy bands**. Within these energy bands local differences between energy levels are extremely small. The energy differences between the levels within the bands are much smaller than the difference between the energy of the highest bonding level and the energy of the lowest anti-bonding level. Like molecular orbitals, and also atomic orbitals, each energy level can contain at most two electrons of opposite spin.

![Image of simple energy bands](images/Bandssosimple.jpg)

The allowed energy levels are so close together that they are sometimes considered as being continuous. It is very important to bear in mind that, while this is a useful and reasonable approximation in some calculations, the bands are actually composed of a finite number of very closely spaced electron energy levels. If there is one electron from each atom associated with each of the *N* orbitals that are combined to form the bands, then because each resulting energy level can be doubly occupied, the 'bonding' band, or **valence band**, will be completely filled and the 'anti-bonding' band, or **conduction band**, will be empty. Electrons cannot have any values of energy that lie outside these bands.
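The statement that N combined orbitals give N closely spaced levels can be made concrete with a toy numerical model. The Python sketch below is an editorial illustration, not part of the TLP: it diagonalises a chain of N identical orbitals, each coupled to its neighbours with an arbitrary coupling energy t, and shows the N resulting levels bunching into a band whose internal spacing shrinks as N grows.

```python
import numpy as np

N, E0, t = 50, 0.0, 1.0  # number of orbitals, on-site energy, coupling (illustrative)

# Tight-binding-style Hamiltonian: E0 on the diagonal, -t between neighbours
H = (np.diag([E0] * N)
     + np.diag([-t] * (N - 1), 1)
     + np.diag([-t] * (N - 1), -1))

levels = np.linalg.eigvalsh(H)  # the N allowed energy levels

print(f"{N} levels spanning {levels.min():.2f} to {levels.max():.2f}")
print(f"typical level spacing ~ {(levels.max() - levels.min()) / (N - 1):.3f}"
      " (tends to zero as N grows, giving a quasi-continuous band)")
```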
An electron can only move ('be promoted') from the valence band to the conduction band if it is given an energy at least as great as the band gap energy. This can happen if, for example, the electron absorbs a photon of sufficiently high energy.

If, as in the above one-dimensional schematic, a band is completely filled with electrons, and the band immediately above it is empty, the material has an energy **band gap**. This band gap is the energy difference between the highest occupied state in the valence band and the lowest unoccupied state in the conduction band. The material is either a semiconductor, if the band gap is relatively small, or an insulator, if the band gap is relatively large.

Electrons in metals are also arranged in bands, but in a metal the electron distribution is different – electrons are not localised on individual atoms or individual bonds. In a simple metal with one valence electron per atom, such as sodium, the valence band is not full, and so the highest occupied electron states lie some distance from the top of the valence band. Such materials are good electrical conductors, because there are empty energy states available just above the highest occupied states, so that electrons can easily gain energy from an applied electric field and jump into these empty energy states.

The distinction between an insulator and a semiconductor is less precise. In general, a material with a band gap of less than about 3 eV is regarded as a semiconductor, while a material with a band gap of greater than 3 eV will commonly be regarded as an insulator. A number of ceramics such as silicon carbide (SiC), titanium dioxide (TiO2), barium titanate (BaTiO3) and zinc oxide (ZnO) have band gaps around 3 eV and are regarded by ceramicists as semiconductors. Such ceramics are often referred to as wide-band-gap semiconductors.

The distinction between semiconductors and insulators arises because in small band gap materials at room temperature a small, but appreciable, number of electrons can be excited from the filled valence bands into the unfilled conduction bands simply by thermal vibration. This leads to semiconducting materials having electrical conductivities between those of metals and those of insulators.

The picture we have sketched here is only a very simple qualitative picture of the electronic structure of a semiconductor, designed to capture the essential aspects of the band structure relevant to this TLP. More precise and quantitative approaches exist, but they are generally quite complex and require an understanding of quantum mechanics. Fortunately, the very simple qualitative picture described above is all that we need to build upon and develop in this TLP.

An extension of the simple band energy diagram, with only the vertical axis labelled as energy and the horizontal axis unlabelled, is to plot the energy vertically against the wave vector, *k*. From de Broglie's relationship, *p* = ħ*k*, where *p* is momentum and ħ is Planck's constant, *h*, divided by 2π. Such plots therefore relate energy to momentum. The reason why such plots are useful lies in the more quantitative methods referred to above, from which we shall simply quote useful results.

The energy of a classical, non-quantum, particle is proportional to the square of its momentum. This is also true for a free electron, as in the simplest picture possible of valence electrons in metals, where the electrostatic potential from the nuclei is ignored.
However, in a real crystalline solid the periodicity of the lattice and the electrostatic potential from the nuclei together mean that the electron energy, *E*, is not simply proportional to the square of the momentum, and so is not proportional to the square of the wave vector, *k*. In these *E*–*k* diagrams, often called **band diagrams**, plotted in what is referred to as a **reduced zone scheme**, the momentum that is plotted is actually a quantity called **crystal momentum**. The distinction between momentum and crystal momentum arises from the periodicity of the solid. Fortunately, this distinction is not important for understanding this TLP on semiconductors.

There are usually many different values of electron energy possible for any given value of the electron momentum, each possible energy value lying in one of the energy bands. When plotted against the wave vector, *k*, the bands of allowed energy are not really flat. This means that bands can overlap in energy, as the maximum value in one band may be higher than the minimum value in another band; in this case the relevant maximum and minimum will occur for different values of *k*, because energy bands never cross over each other. This is one way in which metals can have partially filled energy bands. The available energy states are filled with electrons starting with those lowest in energy. Such overlapping of bands as a function of *k* does not occur in semiconductors.

![Image of overlapping energy bands](images/bandsoverlappingnolabels.jpg)

The Fermi–Dirac Distribution =

Electrons are an example of a type of particle called a **fermion**. Other fermions include protons and neutrons. In addition to their charge and mass, electrons have another fundamental property called **spin**. A particle with spin behaves as though it has some intrinsic angular momentum. This causes each electron to have a small magnetic dipole. The spin quantum number is the projection along an arbitrary axis (usually referred to in textbooks as the *z*-axis) of the spin of a particle, expressed in units of ħ. Electrons have spin ½, which can be aligned in two possible ways, usually referred to as 'spin up' or 'spin down'.

All fermions have half-integer spin. A particle that has integer spin is called a **boson**. Photons, which have spin 1, are examples of bosons. A consequence of the half-integer spin of fermions is that it imposes a constraint on the behaviour of a system containing more than one fermion. This constraint is the **Pauli exclusion principle**, which states that no two fermions can have the exact same set of quantum numbers. It is for this reason that only two electrons can occupy each electron energy level – one electron can have spin up and the other can have spin down, so that they have different spin quantum numbers, even though the electrons have the same energy.

These constraints on the behaviour of a system of many fermions can be treated statistically. The result is that electrons will be distributed into the available energy levels according to the Fermi–Dirac distribution:

\[f\left( \varepsilon \right) = \frac{1}{\exp \left( \left( \varepsilon - \mu \right)/k_{\rm B}T \right) + 1}\]

where *f*(ε) is the occupation probability of a state of energy ε, *k*B is Boltzmann's constant, μ (the Greek letter mu) is the chemical potential, and *T* is the temperature in kelvin. The distribution describes the occupation probability for a quantum state of energy ε at a temperature *T*.
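The distribution is easy to evaluate numerically. The Python sketch below is an editorial illustration (not part of the TLP): it codes f(ε) directly and shows how sharply the occupation probability falls from 1 to 0 across the chemical potential at room temperature, here taken as μ = 5 eV to match the figure that follows.

```python
import numpy as np

kB = 8.617e-5  # Boltzmann's constant in eV K^-1

def fermi_dirac(E, mu, T):
    """Occupation probability f = 1 / (exp((E - mu) / kB T) + 1)."""
    return 1.0 / (np.exp((E - mu) / (kB * T)) + 1.0)

mu = 5.0  # chemical potential in eV, as in the figure below
for dE in (-0.2, -0.05, 0.0, 0.05, 0.2):
    f = fermi_dirac(mu + dE, mu, 300.0)
    print(f"E - mu = {dE:+.2f} eV  ->  f = {f:.4f}")
```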
If the energies of the available electron states and the degeneracy of the states (the number of electron energy states that have the same energy) are both known, this distribution can be used to calculate thermodynamic properties of systems of electrons.

At absolute zero the value of the chemical potential, μ, is defined as the **Fermi energy**. At room temperature the chemical potential for metals is virtually the same as the Fermi energy – typically the difference is only of the order of 0.01%. Not surprisingly, the chemical potential for metals at room temperature is often taken to be the Fermi energy. For a pure undoped semiconductor at finite temperature, the chemical potential always lies halfway between the valence band and the conduction band. However, as we shall see in a subsequent section of this TLP, the chemical potential in extrinsic (doped) semiconductors has a significant temperature dependence.

In order to understand the behaviour of electrons at finite temperature qualitatively in metals and pure undoped semiconductors, it is clearly sufficient to treat μ as a constant to a first approximation. With this approximation, the Fermi–Dirac distribution can be plotted at several different temperatures. In the figure below, μ was set at 5 eV.

![Fermi–Dirac distribution for several temperatures](images/fermiDirac.jpg)

From this figure it is clear that at absolute zero the distribution is a step function. It has the value of 1 for energies below the Fermi energy, and a value of 0 for energies above. For finite temperatures the distribution gets smeared out, as some electrons begin to be thermally excited to energy levels above the chemical potential, μ. The figure shows that at room temperature the distribution function is still not very far from being a step function.

Charge Carriers in Semiconductors =

When an electric field is applied to a metal, negatively charged electrons are accelerated and carry the resulting current. In a semiconductor the charge is not carried exclusively by electrons. Positively charged **holes** also carry charge. These may be viewed either as vacancies in the otherwise filled valence band, or equivalently as positively charged particles.

Since the Fermi–Dirac distribution is a step function at absolute zero, pure semiconductors will have all the states in the valence bands filled with electrons, and will be insulators at absolute zero. This is depicted in the **E**–**k** diagram below; shaded circles represent filled momentum states and empty circles unfilled momentum states. In this diagram **k**, rather than *k*, has been used to denote that the wave vector is actually a vector, i.e., a tensor of the first rank, rather than a scalar.

![E–k diagram of a filled valence band](images/para01.jpg)

If the band gap is sufficiently small and the temperature is increased from absolute zero, some electrons may be thermally excited into the conduction band, creating an electron-hole pair. This is as a result of the smearing out of the Fermi–Dirac distribution at finite temperature. An electron may also move into the conduction band from the valence band if it absorbs a photon that corresponds to the energy difference between a filled state and an unfilled state. Any such photon must have an energy that is greater than or equal to the band gap between the valence band and the conduction band, as in the diagram below.
![E–k diagram showing excitation of an electron across the band gap by a photon](images/para02.jpg)

Whether thermally or photonically induced, the result is an electron in the conduction band and a vacant state in the valence band.

![E–k diagram showing an electron in the conduction band and a vacant state in the valence band](images/para03.jpg)

If an electric field is now applied to the material, all of the electrons in the solid will feel a force from the electric field. However, because no two electrons can be in the exact same quantum state, an electron cannot gain any momentum from the electric field unless there is a vacant momentum state adjacent to the state being occupied by the electron. In the above schematic, the electron in the conduction band can gain momentum from the electric field, as can an electron adjacent to the vacant state left behind in the valence band. In the diagram below, both of these electrons are shown moving to the right.

![E–k diagram showing electrons gaining momentum in an applied field](images/para04.jpg)

The result of this is that the electrons have some net momentum, and so there is an overall movement of charge. This slight imbalance of positive and negative momentum can be seen in the diagram below, and it gives rise to an electric current.

![E–k diagram showing the net imbalance of momentum that gives rise to a current](images/para05.jpg)

The vacant site in the valence band, which has moved to the left, can be viewed as being a particle which carries positive electric charge of equal magnitude to the electron charge. This is therefore a **hole**. It should be appreciated that these schematics do not represent electrons 'hopping' from site to site in real space, because the electrons are not localised to specific sites in space. These schematics are in momentum space. As such, holes should not be thought of as moving through the semiconductor like dislocations when metals are plastically deformed – it suffices to view them simply as particles which carry positive charge.

The opposite process to the creation of an electron-hole pair is called **recombination**. This occurs when an electron drops down in energy from the conduction band to the valence band. Just as the creation of an electron-hole pair may be induced by a photon, recombination can produce a photon. This is the principle behind semiconductor optical devices such as light-emitting diodes (LEDs), in which the photons are light of visible wavelength.

Intrinsic and Extrinsic Semiconductors =

In most pure semiconductors at room temperature, the population of thermally excited charge carriers is very small. Often the concentration of charge carriers may be orders of magnitude lower than for a metallic conductor. For example, the number of thermally excited electrons in silicon (Si) at 298 K is 1.5 × 10^10 cm^–3; in gallium arsenide (GaAs) the population is only 1.1 × 10^6 cm^–3. This may be compared with the number density of free electrons in a typical metal, which is of the order of 10^22 cm^–3.

Given these numbers of charge carriers, it is no surprise that, when they are extremely pure, silicon and other semiconductors have high electrical resistivities, and therefore low electrical conductivities. This problem can be overcome by doping a semiconducting material with impurity atoms. Even very small controlled additions of impurity atoms at the 0.0001% level can make very large differences to the conductivity of a semiconductor.

It is easiest to begin with a specific example. Silicon is a group IV element, and has 4 valence electrons per atom. In pure silicon the valence band is completely filled at absolute zero.
At finite temperatures the only charge carriers are the electrons in the conduction band and the holes in the valence band that arise as a result of the thermal excitation of electrons to the conduction band. These charge carriers are called *intrinsic* charge carriers, and necessarily there are equal numbers of electrons and holes. Pure silicon is therefore an example of an **intrinsic semiconductor**.

If a very small number of atoms of a group V element such as phosphorus (P) are added to the silicon as substitutional atoms in the lattice, additional valence electrons are introduced into the material, because each phosphorus atom has 5 valence electrons. These additional electrons are bound only weakly to their parent impurity atoms (the bonding energies are of the order of hundredths of an eV), and even at very low temperatures these electrons can be promoted into the conduction band of the semiconductor. This is often represented schematically in band diagrams by the addition of 'donor levels' just below the bottom of the conduction band, as in the schematic below.

![Band gap diagram with donor level](images/donorlevels.jpg)

The presence of the dotted line in this schematic does not mean that there now exist allowed energy states within the band gap. The dotted line represents the existence of additional electrons which may be easily excited into the conduction band. Semiconductors that have been doped in this way will have a surplus of electrons, and are called ***n*-type semiconductors**. In such semiconductors, electrons are the majority carriers.

Conversely, if a group III element, such as aluminium (Al), is used to substitute for some of the atoms in silicon, there will be a deficit in the number of valence electrons in the material. This introduces electron-accepting levels just above the top of the valence band, and causes more holes to be introduced into the valence band. Hence, the majority charge carriers are positive holes in this case. Semiconductors doped in this way are termed ***p*-type semiconductors**.

![Band gap diagram with acceptor level](images/acceptorlevels.jpg)

Doped semiconductors (either *n*-type or *p*-type) are known as **extrinsic semiconductors**. The activation energy for electrons to be donated by, or accepted to, impurity states is usually so low that at room temperature the concentration of majority charge carriers is similar to the concentration of impurities. It should be remembered that in an extrinsic semiconductor there is a contribution to the total number of charge carriers from intrinsic electrons and holes, but at room temperature this contribution is often very small in comparison with the number of charge carriers introduced by the controlled impurity doping of the semiconductor.

Direct and Indirect Band Gap Semiconductors =

The band gap represents the minimum energy difference between the top of the valence band and the bottom of the conduction band. However, the top of the valence band and the bottom of the conduction band are not generally at the same value of the electron momentum. In a **direct band gap semiconductor**, the top of the valence band and the bottom of the conduction band occur at the same value of momentum, as in the schematic below.
![Direct band gap semiconductor](images/direct_bandgap.jpg)

In an **indirect band gap semiconductor**, the maximum energy of the valence band occurs at a different value of momentum to the minimum in the conduction band energy:

![Indirect band gap semiconductor](images/indirect_bandgap.jpg)

The difference between the two is most important in optical devices. As has been mentioned in the section on charge carriers in semiconductors, a photon can provide the energy to produce an electron-hole pair. Each photon of energy *E* has momentum *p* = *E* / *c*, where *c* is the velocity of light. An optical photon has an energy of the order of 10^–19 J, and, since *c* = 3 × 10^8 m s^–1, a typical photon has a very small amount of momentum.

A photon of energy *E*g, where *E*g is the band gap energy, can produce an electron-hole pair in a direct band gap semiconductor quite easily, because the electron does not need to be given very much momentum. However, an electron must also undergo a significant change in its momentum for a photon of energy *E*g to produce an electron-hole pair in an indirect band gap semiconductor. This is possible, but it requires such an electron to interact not only with the photon, to gain energy, but also with a lattice vibration called a phonon, in order to either gain or lose momentum. The indirect process proceeds at a much slower rate, as it requires three entities to interact in order to proceed: an electron, a photon and a phonon. This is analogous to chemical reactions, where, in a particular reaction step, a reaction between two molecules will proceed at a much greater rate than a process which involves three molecules.

The same principle applies to the recombination of electrons and holes to produce photons. The recombination process is much more efficient for a direct band gap semiconductor than for an indirect band gap semiconductor, where the process must be mediated by a phonon. As a result of such considerations, gallium arsenide and other direct band gap semiconductors are used to make optical devices such as LEDs and semiconductor lasers, whereas silicon, which is an indirect band gap semiconductor, is not. The table in the next section lists a number of different semiconducting compounds and their band gaps, and it also specifies whether their band gaps are direct or indirect.

Compound Semiconductors =

In addition to group IV elements, compounds of group III and group V elements, and also compounds of group II and group VI elements, are often semiconductors. The common feature of all of these is that they have an average of 4 valence electrons per atom. One example of a compound semiconductor is gallium arsenide, GaAs. In a compound semiconductor like GaAs, doping can be accomplished by slightly varying the stoichiometry, i.e., the ratio of Ga atoms to As atoms. A slight increase in the proportion of As produces *n*-type doping, and a slight increase in the proportion of Ga produces *p*-type doping. The table below lists some semiconducting elements and compounds together with their band gaps at 300 K.

| | **Material** | **Direct / Indirect Band Gap** | **Band Gap Energy at 300 K (eV)** |
| - | - | - | - |
| Elements | C (diamond) | Indirect | 5.47 |
| | Ge | Indirect | 0.66 |
| | Si | Indirect | 1.12 |
| | Sn (grey) | Direct | 0.08 |
| Groups III–V compounds | GaAs | Direct | 1.42 |
| | InAs | Direct | 0.36 |
| | InSb | Direct | 0.17 |
| | GaP | Indirect | 2.26 |
| | GaN | Direct | 3.36 |
| | InN | Direct | 0.70 |
| Groups IV–IV compounds | α-SiC | Indirect | 2.99 |
| Groups II–VI compounds | ZnO | Direct | 3.35 |
| | CdSe | Direct | 1.70 |
| | ZnS | Direct | 3.68 |

Data from R.E. Hummel, *Electronic Properties of Materials*, 3rd edition, Appendix 4, p. 413.
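Because a direct band gap material emits photons with energy close to Eg on recombination, the table above maps directly onto emission wavelengths through λ = hc/Eg. The Python sketch below is an editorial illustration using a sample of the direct-gap rows from the table; GaAs comes out in the near infrared and GaN in the ultraviolet.

```python
h = 4.136e-15  # Planck's constant, eV s
c = 2.998e8    # speed of light, m s^-1

def gap_to_wavelength_nm(Eg):
    """Photon wavelength (nm) corresponding to a band gap Eg in eV."""
    return h * c / Eg * 1e9

# Direct band gap entries taken from the table above (band gaps in eV at 300 K)
for name, Eg in [("GaAs", 1.42), ("CdSe", 1.70), ("ZnO", 3.35), ("GaN", 3.36)]:
    print(f"{name}: Eg = {Eg:.2f} eV  ->  lambda ~ {gap_to_wavelength_nm(Eg):.0f} nm")
```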
Behaviour of the Chemical Potential =

The Fermi–Dirac distribution was introduced in the section The Fermi–Dirac Distribution. The relevant equation to describe the distribution is

\[f\left( \varepsilon \right) = \frac{1}{\exp \left( \left( \varepsilon - \mu \right)/k_{\rm B}T \right) + 1}\]

so that for a chemical potential, μ, of 5 eV, the distribution takes the form shown below as a function of temperature.

![Fermi–Dirac distribution](images/fermiDirac.jpg)

One feature that is very important about the Fermi–Dirac distribution is that it is symmetric about the chemical potential. Hence, for a simple intrinsic semiconductor, which has equal numbers of electrons in the conduction band and holes in the valence band, and where the density of states is also symmetric about the centre of the band gap, the chemical potential **must** lie halfway between the valence band and the conduction band, regardless of the temperature, because each electron promoted to the conduction band leaves a hole in the valence band. This is shown in the band diagram below, in which energy is plotted vertically against temperature horizontally.

![Temperature dependence of the chemical potential in an intrinsic semiconductor](images/chemical_potential_intrinsic.jpg)

[Note that if the density of states is not exactly symmetric about the centre of the band gap, then the chemical potential does not have to be exactly in the centre of the band gap. However, under such circumstances, it will still be extremely close to the centre of the band gap whatever the temperature, and for all practical purposes can be considered to be in the centre of the band gap.]

For an extrinsic semiconductor the situation is slightly more complicated. At absolute zero in an *n*-type semiconductor, the chemical potential must lie in the centre of the gap between the donor level and the bottom of the conduction band. At low temperatures in such a semiconductor there are more conduction electrons than there are holes; if the donor level is more than half full, the chemical potential must lie somewhere between the donor levels and the conduction band. At higher temperatures, when the donor level is completely depleted of electrons, and the contribution from intrinsic electrons to the overall electrical conductivity becomes more substantial, the chemical potential tends towards that for an intrinsic semiconductor, i.e., halfway between the conduction and valence bands, and therefore in the middle of the band gap.

![Temperature dependence of the chemical potential in an n-type semiconductor](images/chemical_potential_ntype.jpg)

For *p*-type semiconductors the behaviour is similar, but the other way around, i.e., the chemical potential starts midway between the valence band and the acceptor levels at absolute zero, and gradually increases in energy as the temperature increases, so that at high temperatures it too is in the middle of the band gap.

![Temperature dependence of the chemical potential in a p-type semiconductor](images/chemical_potential_ptype.jpg)

Metal–Semiconductor Junction – Rectifying Contact =

Before discussing the behaviour of a metal–semiconductor boundary, it is first necessary to introduce the concept of the **work function**. The work function of a material is the energy required to remove an electron from the level of the chemical potential and give it enough energy to escape to infinity and arrive there with zero energy.
Albert Einstein first proposed the concept of the work function in his work on the photoelectric effect in metals. It was for this work, rather than for his work on relativity, that Einstein was awarded the Nobel Prize in 1921.

When a metal and a semiconductor are joined, two possible types of contact can result, depending on the combination of metal and semiconductor used. The contact may be **rectifying**, which only allows current to pass in one direction. Alternatively, it may be **ohmic**, in which case current can pass in either direction. Here we will discuss the rectifying contact, sometimes called the **Schottky barrier contact**.

Consider a band diagram showing a metal and an *n*-type semiconductor before they are joined. The work functions of the metal and semiconductor can be labelled ΦM and ΦS respectively. A dashed line at the top of the diagram represents the zero of energy at infinity, and the chemical potential of the semiconductor is represented by a dashed line labelled μ. For the metal, the chemical potential is taken to be at the Fermi energy (also known as the Fermi level), labelled *E*F – at low temperatures the difference between the chemical potential and the Fermi energy can be neglected (see the section entitled The Fermi–Dirac Distribution). The energy difference, labelled χ, between the bottom of the conduction band and infinity is called the **electron affinity** of the semiconductor.

It should be noted that such a band diagram is different from the illustrations showing energy bands plotted against momentum: any horizontal change in a diagram of this kind represents a variation in the energy bands with position in real space. In the metal, the electron affinity is the energy which would be released if an electron were to fall in from infinity to the highest unoccupied energy level of the material. Hence, for a metal, the electron affinity is identical to the chemical potential or Fermi energy. This is also why the electron affinity for a semiconductor is the energy difference between the bottom of the conduction band and infinity.

When the metal and the semiconductor are brought into contact, electrons can move between them. The chemical potential can be interpreted as the free energy per electron, so because ΦM > ΦS, the electrons in the conduction band of the *n*-type semiconductor can lower their energy by moving from the semiconductor into the metal.

The flow of electrons into the metal from the semiconductor on contact causes a slight negative charging of the metal, thereby repelling further flow of electrons from the conduction band of the semiconductor into the metal. The electric potential generated by the charging of the metal causes a deformation of the energy bands in the semiconductor close to the metal–semiconductor interface. The chemical potential in the semiconductor also falls in this region, as the higher-energy electrons in the region adjacent to the metal have moved into the metal.
The Fermi level of the metal is not appreciably affected, because there are over 10^10 times more valence electrons in the metal than there are conduction electrons in the semiconductor before contact – the addition of a few extra electrons from the semiconductor clearly makes little difference to the Fermi level in the metal.

Far from the metal–semiconductor interface, the band structure in the semiconductor is unaffected, except for an overall displacement down in energy. The way to think about this change in the band structure is to consider the bands in the semiconductor to be 'pinned' at the interface with the metal. The chemical potential in the region of semiconductor immediately adjacent to the metal moves in order to be in equilibrium with the metal. This must also be true at distances in the semiconductor remote from the interface: at a large distance from the interface, the bands therefore have the same position relative to the shifted chemical potential as they did before the metal and semiconductor were joined.

The most immediate consequence of making contact with the metal is that a region near to the metal–semiconductor interface is produced in the semiconductor which has no conduction electrons in it – this region is depleted of electrons in the conduction band, and is therefore called the **depletion layer**, or the **space charge region**.

The depletion layer acts like a potential barrier. This barrier has a height of ΦM − χ for electrons moving from the metal to the semiconductor. The size of the barrier for electrons moving from the semiconductor to the metal is ΦM − ΦS, which is also equal to the total downward displacement of the bands in the semiconductor remote from the metal–semiconductor interface.

At a particular finite temperature, there will be electrons in the metal that can be thermally excited enough to overcome the potential barrier and diffuse into the semiconductor conduction band. Likewise, there will be electrons in the conduction band that will have enough thermal energy to diffuse from the semiconductor into the metal. In equilibrium these currents must be equal. If they were not, charging would occur at the interface, and the band structure would be deformed until the diffusion currents of the electrons were identical in both directions. The higher potential barrier encountered in moving from the metal to the semiconductor is compensated for by the much larger number of free electrons in the metal.

In addition to the above **diffusion current**, there is also a **drift current**. The drift current is usually very small, and independent of the height of the potential barrier. It arises through the formation of electron-hole pairs in, or very near to, the depletion layer as a result of thermal excitation. The electron in such an electron-hole pair will be accelerated into the semiconductor (electrons 'roll' downhill), while the hole will be accelerated towards the boundary with the metal (holes 'float up' to the highest point). This contribution to the current is usually very small, and is dependent on the band gap of the semiconductor and the temperature. The total current across the junction is the net sum of the diffusion current and the drift current. At equilibrium, i.e. zero externally applied voltage, the net sum of these two currents is zero.
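The two barrier heights quoted above follow immediately from the three energies ΦM, ΦS and χ. The Python sketch below is an editorial illustration; the numerical values are hypothetical, chosen only to satisfy the rectifying condition ΦM > ΦS, and are not data from the TLP.

```python
# Hypothetical illustrative values (eV); chosen so that phi_M > phi_S
phi_M = 4.8  # metal work function
phi_S = 4.3  # semiconductor work function
chi   = 4.0  # semiconductor electron affinity

barrier_metal_to_semiconductor = phi_M - chi    # seen by electrons in the metal
barrier_semiconductor_to_metal = phi_M - phi_S  # equals the band displacement far from the interface

print(f"metal -> semiconductor barrier: {barrier_metal_to_semiconductor:.1f} eV")
print(f"semiconductor -> metal barrier: {barrier_semiconductor_to_metal:.1f} eV")
```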
It should be noted that any hole created in the semiconductor near the metal, as a result of the thermal excitation of electrons into the conduction band, which then successfully reaches the boundary with the metal, will ultimately be destroyed by recombining with an electron from the metal.

If the metal is now connected to the negative terminal of a battery, and the semiconductor is connected to the positive terminal, there will be a greater negative charge on the metal, and the bands in the semiconductor will be further deformed. This will increase the potential barrier and also the width of the depletion layer. This results in the diffusion current in both directions becoming negligible, because the potential barrier is large for both directions of diffusion. The contribution from the drift current will still remain, producing an extremely small and constant current of electrons from the metal into the positively biased semiconductor. This situation is called **reverse bias**.

When the metal is connected instead to the positive terminal of a battery, the negative charge on the metal is slightly reduced, and this will in turn reduce the deformation of the bands in the semiconductor. The resulting reduction in the width of the depletion layer, and the much lower potential barrier for electrons to move from the semiconductor into the metal, result in a large net movement of electrons into the metal from the semiconductor, and therefore a net current flow. This is called **forward bias**.

The overall current–voltage characteristic for a rectifying contact of this type is shown in the figure below. Note that the size of the current for reverse bias (negative voltage) has been greatly exaggerated.

![](images/schottky_I_V.jpg)
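The strongly asymmetric characteristic in the figure has the form of the ideal-diode (Shockley) equation, I = I0(exp(eV/kBT) − 1): a bias-independent drift current −I0 in reverse, and an exponentially growing diffusion current in forward bias. The Python sketch below is an editorial illustration of that form; the saturation current I0 is an arbitrary assumed value, and the equation itself is standard rectifier theory rather than a formula quoted in the TLP.

```python
import numpy as np

kT = 0.02585  # thermal energy kB*T at 300 K, in eV
I0 = 1e-12    # assumed saturation (drift) current, A -- illustrative only

def diode_current(V):
    """Ideal-diode characteristic I = I0 (exp(V / kT) - 1), V in volts."""
    return I0 * np.expm1(V / kT)

for V in (-0.4, -0.1, 0.0, 0.1, 0.2, 0.4):
    print(f"V = {V:+.1f} V  ->  I = {diode_current(V):+.3e} A")
```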
This type of contact is also described as **metallization**, and is used to supply electric current into semiconductor devices.
The *p*–*n* Junction
As an alternative to the Schottky barrier contact described in the previous section, a junction between an *n*-type semiconductor and a *p*-type semiconductor can be used as a rectifying contact. To see why, browse through the animation below. The various parts of the animation are discussed in detail later in this section, so do not be concerned if you do not understand every stage. You can return to this animation as you read more about the *p*-*n* junction. It should be noted that in the above animation the relative quantity of electrons in the *p*-type material and the relative quantity of holes in the *n*-type semiconductor before they are joined together have been greatly exaggerated for the purposes of illustration. Both of these are **minority carriers** in their respective environments – remember that electrons are the majority carriers in *n*-type semiconductors and that holes are the majority carriers in *p*-type semiconductors. When the two semiconductors are initially joined together, electrons will flow from the *n*-type semiconductor into the *p*-type semiconductor, and holes will flow from the *p*-type semiconductor into the *n*-type semiconductor. The chemical potentials of the two semiconductors will come to equilibrium, and the band structures will be deformed accordingly. A depletion layer is formed at the interface between the two types of doped semiconductor, in which the numbers of electrons in the conduction band and holes in the valence band are both significantly reduced. In equilibrium, there is a potential barrier for electrons to diffuse from the *n*-type semiconductor into the *p*-type semiconductor, and also for holes to diffuse from the *p*-type semiconductor into the *n*-type semiconductor. These are the majority carriers. In addition, there will be currents from minority carriers, i.e., holes on the *n*-type side and electrons on the *p*-type side. For example, holes generated by the thermal excitation of electrons in the *n*-type semiconductor that find themselves in the depletion layer between the *n*- and *p*-type semiconductors will be swept over to the *p*-type side of the junction by the strong electric field within the depletion layer – since the electric field deters electrons from diffusing from the *n*-type side, it necessarily assists holes entering the depletion layer from the *n*-type side. At equilibrium, the total current across the junction has to be the same in both directions, so that the overall net current is zero. Any imbalance in current would mean that the system was not in equilibrium, and the bands would have to deform until the system returned to equilibrium. If the *n*-type region is now connected to the positive terminal of a d.c. source and the *p*-type to the negative side, the bands will be further deformed at the interface, creating larger potential barriers for both electrons and holes to move across the junction and a wider depletion layer (i.e., a wider space charge region). In this situation of reverse bias, the only current is the very small contribution from the drift current arising from the minority carriers on both sides of the junction. When the *n*-type region is connected to the negative terminal of a d.c. source and the *p*-type to the positive side, the depletion layer becomes narrower and the potential barriers are decreased in size.
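How strongly the bias alters the depletion layer can be estimated with the standard depletion approximation for an abrupt junction; this is a textbook result rather than one derived in this TLP, and all the numbers below are illustrative assumptions.

```python
# Sketch: depletion-layer width of an abrupt p-n junction vs applied bias,
# using the standard depletion approximation. All numbers are illustrative assumptions.
import numpy as np

e    = 1.602e-19          # electronic charge (C)
kT_e = 0.0259             # thermal voltage at 300 K (V)
eps  = 11.7 * 8.854e-12   # permittivity of silicon (F/m), assumed material
Na   = 1e22               # acceptor density on p side (m^-3), assumed
Nd   = 1e22               # donor density on n side (m^-3), assumed
ni   = 1.0e16             # intrinsic carrier density of Si at 300 K (m^-3), approximate

V_bi = kT_e * np.log(Na * Nd / ni**2)     # built-in potential (V), ~0.72 V here

def depletion_width(V_applied):
    """Total depletion width (m); V_applied > 0 is forward bias."""
    N_eff = Na * Nd / (Na + Nd)
    return np.sqrt(2.0 * eps * (V_bi - V_applied) / (e * N_eff))

for V in (-5.0, 0.0, +0.3):               # reverse bias, equilibrium, forward bias
    print(f"V = {V:+.1f} V : W = {depletion_width(V)*1e9:.0f} nm")
```

The printed widths show the behaviour described above: the depletion layer widens markedly under reverse bias and narrows under forward bias.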
For this forward biasing, there will be a large net flow of electrons from the *n*-type semiconductor into the *p*-type semiconductor, and there will also be a net flow of holes moving into the *n*-type semiconductor from the *p*-type semiconductor. In a *p*-*n* junction rectifier, an increase in the strength of the reverse biasing will eventually lead to an increase in the current that flows. This is because, for a sufficiently high electric field, dielectric breakdown of the semiconductor occurs. The bias at which this occurs is called the breakdown voltage. The overall current – voltage characteristic of the *p*-*n* junction is shown in the diagram below. ![Current-Voltage characteristic of the p-n junction](images/p_n_I_V.jpg)
Bipolar Transistor
Imagine two *p*-*n* junctions joined back-to-back. This is the basic structure of the bipolar transistor. It is called bipolar because both electrons and holes carry current in the device. Bipolar transistors can occur in either *n*–*p*–*n* or *p*–*n*–*p* configurations. A schematic of a device in the *n*–*p*–*n* configuration is displayed below. Use the buttons to navigate through the animation. There are three contacts to the transistor in the above animation. These are (i) the emitter, (ii) the base and (iii) the collector, often labelled E, B and C. The emitter and the collector are *n*-type doped material, while the base is *p*-doped material. With forward biasing applied to the emitter-base junction, and reverse biasing applied to the base-collector junction, the bands deform as shown in the animation. The potential barrier for an electron to diffuse from the emitter into the base is relatively low. Electrons injected from the emitter then diffuse through the thin base region before being accelerated through the strongly reverse-biased field between base and collector. Note that while the electrons are in the base region, they are minority carriers. The base has to be thin, so that electrons diffusing through are not lost by recombining with a hole, i.e., a majority carrier within the base region. Not only is the electron current reduced when recombination occurs, but heat is also dissipated in the process, which is also undesirable. Varying the base voltage changes the size of the potential barrier for electrons to be transferred from the emitter. In this way, the emitter-base voltage can be used to modulate the flow of current from emitter to collector. The applied voltage can also be used to stop the current flowing through the device completely, in effect using the transistor as a switch. It is this switching function which is used in logic circuits, such as those used in computers. An open switch corresponds to a binary 0 (no current flow from emitter to collector), while a closed switch corresponds to a binary 1 (current flows between emitter and collector).
Metal Oxide Semiconductor Field Effect Transistor (MOSFET)
![Schematic of a MOSFET](images/mosfet.jpg) The metal oxide semiconductor field effect transistor (MOSFET) is one of the cornerstones of modern semiconductor technology. The general structure is a lightly doped *p*-type substrate, into which two regions, the source and the drain, both of heavily doped *n*-type semiconductor, have been embedded. The symbol *n*+ is used to denote this heavy doping. The source and the drain are about 1 μm apart. Metallized contacts are made to both source and drain, generally using aluminium. The rest of the substrate surface is covered with a thin oxide film, typically about 0.05 μm thick.
The gate electrode is laid on top of the insulating oxide layer, and the body electrode in the above diagram provides a counter electrode to the gate. The thin oxide film contains silicon dioxide (SiO2), but it may well also contain silicon nitride (Si3N4) and silicon oxynitride (Si2N2O). The *p*-type substrate is only very lightly doped, and so it has a very high electrical resistance: current cannot pass between the source and drain if there is zero voltage on the gate. Application of a positive potential to the gate electrode creates a strong electric field across the *p*-type material even for relatively small voltages, as the device thickness is very small and the field strength is given by the potential difference divided by the separation of the gate and body electrodes. Since the gate electrode is positively charged, it repels the holes in the *p*-type region. For high enough electric fields, the resulting deformation of the energy bands will cause the bands of the *p*-type region to curve up so much that electrons will begin to populate the conduction band. This is depicted in the animation below, which shows a cross section through the region of the *p*-type material near the gate electrode. Click the button to increase the voltage applied to the gate electrode. The population of the *p*-type substrate conduction bands in the region near to the oxide layer creates a conducting channel between the source and drain electrodes, permitting a current to pass through the device. The population of the conduction band begins above a critical voltage, VT, below which there is no conducting channel and no current flows. In this way the MOSFET may be used as a switch. Above the critical voltage, the gate voltage modulates the flow of current between source and drain, and may be used for signal amplification. This is just one type of MOSFET, called 'normally-off' because it is only the application of a positive gate voltage above the critical voltage which allows it to pass current. Another type of MOSFET is the 'normally-on', which has a conducting channel of less heavily doped *n*-type material between the source and drain electrodes. This channel can be depleted of electrons by applying a negative voltage to the gate electrode. A large enough negative voltage will cause the channel to be closed off entirely. 'Normally-off' MOSFETs are used in a wide variety of integrated circuit applications. AND gates, NOT gates and NAND gates are all made from this type of MOSFET, and all are essential components of memory devices.
Summary =
The purpose of this teaching and learning package has been to give a very basic introduction to semiconductors. By the end of this package you should be conversant with the various terminology used to describe semiconducting materials, such as *n*-type semiconductor, *p*-type semiconductor, electrons, holes, band gap, majority carrier, minority carrier, work function, chemical potential, Fermi level, electron affinity, forward bias, reverse bias, Schottky barriers and Ohmic contacts. You should have an appreciation of what types of materials are semiconductors and what distinguishes wide-band-gap semiconductors from insulators. You should also be able to appreciate how very simple devices made from semiconducting materials, such as *p*-*n* junctions and MOSFETs, are able to respond to applied voltages. Finally, the textbooks listed in the *Going further* section should be consulted for greater detail about the different topics covered in this TLP.
Questions =
### Quick questions
*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. What are the majority carriers of electric current in B-doped Si?

   a. Holes
   b. Electrons
   c. Boron ions
   d. Silicon atoms

2. At absolute zero, which of these statements describes the position of the chemical potential in an *n*-type semiconductor?

   a. It is between the top of the valence band and the acceptor levels.
   b. It is in the middle of the band gap.
   c. It is between the middle of the band gap and the bottom of the conduction band.
   d. It is between the middle of the band gap and the top of the valence band.
   e. It is between the donor levels and the bottom of the conduction band.

3. At room temperature, which of these statements describes the position of the chemical potential in an *n*-type semiconductor?

   a. It is between the top of the valence band and the acceptor levels.
   b. It is in the middle of the band gap.
   c. It is between the middle of the band gap and the bottom of the conduction band.
   d. It is somewhere between the middle of the band gap and the top of the valence band.
   e. It is at the same energy level as the donor energy levels.

4. Which of the following junctions is **not** rectifying?

   a. Metal – *p*-type semiconductor contact where ΦM < ΦS.
   b. *p*-*n* junction
   c. Metal – *p*-type semiconductor contact where ΦM > ΦS.
   d. Metal – *n*-type semiconductor contact where ΦM > ΦS.

5. The work function of gold is 4.8 eV and the electron affinity of silicon is 4.05 eV. Silicon has a band gap of 1.12 eV at 300 K. What is the barrier height at 300 K preventing the majority carriers in *n*-type silicon from crossing between a piece of *n*-type silicon and gold, where the chemical potential of the *n*-type silicon is 0.25 eV below the bottom of the conduction band?

   a. 0.75 eV.
   b. 0.5 eV.
   c. 1.12 eV.
   d. 4.05 eV.
   e. 0.25 eV.

Going further =
### Books
There are many excellent textbooks which give an introduction to semiconductors. The following are just a few examples:
* *Solid State Physics*, N.W. Ashcroft and N.D. Mermin (Harcourt Brace 1976)
* *Solid State Physics*, J.S. Blakemore (Saunders 1974)
* *The Electronic Structure and Chemistry of Solids*, P.A. Cox (Oxford 1987)
* *Electronic Properties of Materials*, R.E. Hummel (Springer-Verlag 2001)
* *Principles of Electronic Materials and Devices*, S.O. Kasap (McGraw-Hill 2006)
* *Introduction to Solid State Physics*, C. Kittel (Wiley 1996)
* *Electronic Materials Science*, J.W. Mayer and S.S. Lau (Macmillan 1990)
* *Introductory Semiconductor Device Physics*, G. Parker (Institute of Physics 2004)
* *The Solid State*, H.M. Rosenberg (OUP 1988)
* *Lectures on the Electrical Properties of Materials*, L. Solymar and D. Walsh (OUP 1993)
* *Solid State Electronic Devices*, B.G. Streetman and S.K. Banerjee (Pearson Prentice Hall 2006)

Some of these texts are deliberately written as introductory texts for undergraduates in physics, chemistry, materials science and electrical engineering, while others are meant for graduate students in these areas and lead the reader into research-level material.
### Websites
There are now a great many web sites containing material relevant to this TLP.
These web sites offer various levels of sophistication in their treatment of semiconductors. Examples are:
* Three Nobel Prize web sites, aimed at the general public, which give fairly simple, but useful, treatments.
* The Wikipedia article on semiconductors, which is a gateway to further articles.
Rolling Simulation
This simulation concerns the rolling of metal sheet, so as to reduce its thickness. Normally, this process is applied to relatively wide plate or sheet, so that plane strain conditions are created - i.e. the strain is confined to the sectional plane shown in the simulation, with the width of the sheet remaining unchanged as it is rolled. The length of the sheet therefore increases in proportion to the reduction in its thickness, since plastic deformation of this type occurs with no change in volume. The predictions shown in the simulation (for stress and strain fields, and also for the rolling load and hence the average rolling stress) were obtained by running a Finite Element Model (FEM) for several pre-selected cases. In this type of model, the domain of interest is sub-divided into a number of volume elements, creating a mesh, with the overall deformation taking place such that the response of all of these elements conforms to the governing equations and the imposed boundary conditions. The stress-strain behaviour of the material is pre-specified, and is displayed here for each case, characterised by a (uniaxial) Yield Stress (YS), a constant Work Hardening Rate (WHR) and no strain rate sensitivity. The effects of varying the YS and WHR, the (thickness) Reduction Ratio and the application of tension to the emerging sheet can all be explored by changing the case being displayed. The roll diameter, initial billet thickness and roll rotation rate are all fixed. It is assumed that the rolls are rigid and that there is sticking friction (no sliding) between roll and sheet. The displayed stress and strain fields, and the mesh deformation, relate to the steady state that is set up soon after rolling is started. In order to make the movement clearer, it has been slowed down - a timer is shown in the lower right of the display. Outcomes that can be explored include the rolling load, the creation of residual stresses in the sheet, the distribution of plastic strain etc. The stress displayed can be either the von Mises stress (a scalar), which is given by the square root of half the sum of the squares of the three differences between the principal stresses, or one of the normal stresses in the rolling (1), through-thickness (2) or transverse (3) directions. There is also an option for displaying the equivalent plastic strain, which is the analogous strain parameter to the von Mises stress. Material properties are taken to be constant, homogeneous and isotropic. The display shown is that of the top half of the system - i.e. the dotted line at the bottom is a plane of symmetry.
**Academic consultant**: Bill Clyne (University of Cambridge)
**Content development:** James Dean, David Brook, Alexander Aleschenko
**Web development:** Lianne Sallows and David Brook
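As a small supplement to the description above, the von Mises stress is straightforward to compute from the three principal stresses; in the sketch below the stress values are invented purely for illustration.

```python
# Sketch: the von Mises stress as defined above, from the three principal stresses.
import math

def von_mises(s1, s2, s3):
    """Square root of half the sum of the squares of the three principal-stress differences."""
    return math.sqrt(0.5 * ((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2))

# Illustrative values (MPa) for the rolling (1), through-thickness (2)
# and transverse (3) normal stresses:
print(von_mises(150.0, -80.0, 30.0))   # ~199 MPa
```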
Aims
On completion of this TLP you should:
* appreciate that slip occurs on a given slip system when the resolved shear stress on that system reaches a critical value
* be able to calculate and predict slip behaviour in simple situations
* understand how slip geometry changes as slip proceeds
* understand the phenomena of geometric softening and work hardening and their effect on slip in hexagonal close packed crystals, cubic close packed crystals and polycrystalline materials

Before you start
In this package, we use the Miller three-index notation to describe lattice planes and directions. (For simplicity, we have avoided the use of the Miller-Bravais four-index notation for the description of hexagonal crystal systems.) It is assumed that you are familiar with the concept of dislocations, including their structure and movement. It might be useful to look at the teaching and learning package on dislocations. It would also be useful to be familiar with some basic crystal structures and crystallography.

Introduction
When a single crystal is deformed under a tensile stress, it is observed that plastic deformation occurs by *slip* on well-defined parallel crystal planes. Sections of the crystal slide relative to one another, changing the geometry of the sample as shown in the diagram. ![Diagram illustrating slip in a single crystal](images/diagram001.gif) By observing slip on a number of specimens of the same material, it is possible to determine that slip always occurs on a particular set of crystallographic planes, known as *slip planes*. In addition, slip always takes place along a consistent set of directions within these planes – these are called *slip directions*. The combination of slip plane and slip direction together makes up a *slip system*. Slip systems are usually specified using the Miller index notation. For example, cubic close-packed metals slip on <1 -1 0>{111}: that is, in directions related to [1 -1 0] by symmetry and on planes related to (111) by symmetry. The slip direction must lie in the slip plane. Generally, one set of crystallographically equivalent slip systems dominates the plastic deformation of a given material. However, other slip systems might operate at high temperature or under high applied stress. The crystal structure and the nature of the interatomic bonding determine the slip systems that operate in a material.

Slip geometry: the critical resolved shear stress =
Slip occurs by dislocation motion. To move dislocations, a certain stress must be applied to overcome the resistance to dislocation motion. This is discussed further in the dislocations package on this site. It is observed experimentally that slip occurs when the shear stress acting in the slip direction on the slip plane reaches some critical value. This critical shear stress is related to the stress required to move dislocations across the slip plane. The tensile yield stress of a material is the applied stress required to start plastic deformation of the material under a tensile load. We want to relate the tensile stress applied to a sample to the shear stress that acts along the slip direction. This can be done as follows. Consider applying a tensile stress along the long axis of a cylindrical single crystal sample with cross-sectional area A: ![Diagram illustrating application of tensile stress along the long axis of a cylindrical single crystal sample](images/diagram002.gif) The applied force along the tensile axis is F = σA.
If slip occurs on the slip plane shown in the diagram, with plane normal **n**, then the slip direction will lie in this plane. We can calculate the *resolved shear stress* acting parallel to the slip direction on the slip plane as follows. The area of the slip plane is A/cos φ, where φ is the angle between the tensile axis and the slip plane normal. The component of the axial force F that lies parallel to the slip direction is F cos λ, where λ is the angle between the tensile axis and the slip direction. The resolved shear stress on the slip plane parallel to the slip direction is therefore given by:

\[\tau_{\rm R} = \frac{\textrm{resolved force acting on slip plane}}{\textrm{area of slip plane}} = \frac{F\cos \lambda}{A/\cos \varphi} = \frac{F}{A}\cos \varphi \cos \lambda\]

It is found that the value of τR at which slip occurs in a given material with specified dislocation density and purity is a constant, known as the *critical resolved shear stress* τC. This is **Schmid's Law**. The quantity cos φ cos λ is called the *Schmid factor*. The tensile stress at which the crystal starts to slip is known as the yield stress σy, and corresponds to the quantity F/A in the above equation. Symbolically, therefore, Schmid's Law can be written:

τC = σy cos φ cos λ

In a given crystal, there may be many available slip systems. As the tensile load is increased, the resolved shear stress on each system increases until eventually τC is reached on one system. The crystal begins to plastically deform by slip on this system, known as the *primary slip system*. The stress required to cause slip on the primary slip system is the *yield stress* of the single crystal. As the load is increased further, τC may be reached on other slip systems; these then begin to operate. From Schmid's Law, it is apparent that the primary slip system will be the system with the *greatest Schmid factor*. It is possible to calculate the values of cos φ cos λ for every slip system and subsequently determine which slip system operates first. This can be time consuming, but for cubic crystal systems there are standard constructions that provide quick routes to identifying the primary slip system.

Geometry during slip
Two conditions restrict the geometry of a crystal as slip proceeds:
* the *spacing of the planes* remains constant;
* the *number of planes in the specimen* is conserved.

These give rise to two important relationships that describe the way that the orientation of slip planes and slip directions changes as slip proceeds:
* **l cos φ** is constant, so that as the specimen length l increases, the angle between the slip plane normal and the tensile axis approaches 90°
* **l sin λ** is constant, so that as l increases, the angle between the slip direction and the tensile axis approaches zero.

If a crystal is extended from length l0 to length l1, then the angles φ and λ are related as follows:

l0 cos φ0 = l1 cos φ1 and l0 sin λ0 = l1 sin λ1

Slip in HCP metals 1: slip systems
In hexagonal close packed (h.c.p.) metals, such as cadmium, slip occurs in <100> type directions on {001} type planes. These correspond to the close packed directions in the close packed planes. Examination of the crystal structure (see the diagram below) shows that there is only one distinct lattice plane of the {001} type, i.e. (001). There are three distinct <100> directions lying in this (001) plane: [100], [010] and [110]. Hence, the h.c.p. structure exhibits three distinct slip systems.
The h.c.p. structure has only two *independent* slip systems, since any slip on [110](001) can be described entirely as a combination of slip on [100](001) and [010](001). ![Diagram of h.c.p. crystal structure](images/diagram006.gif) Hexagonal close packed crystals slip on <100>{001} slip systems. This diagram shows a 2 × 2 array of unit cells projected onto the (001) plane. The three slip directions lying in the plane are shown as blue arrows.

Slip in HCP metals 2: application of Schmid's Law =
The analysis of slip in h.c.p. crystals can be demonstrated by considering the special case in which the slip direction, the tensile axis and the normal of the slip plane are all coplanar. In this special case, φ + λ = 90°. This condition is **not generally true**. For example, consider a cadmium single crystal strained with the tensile axis along the [021] direction. Cadmium is (approximately) hexagonal close-packed with a = b = 2.98 Å and c = 5.62 Å, and it slips on <100>{001}. The slip plane must be (001), since that is the only plane of the {001} type. Hence, the slip plane normal is parallel to [001]. The tensile axis [021] and the slip plane normal [001] both lie in the (100) plane. The operating slip system will be that with the highest resolved shear stress acting upon it. By considering the geometry, shown in the diagram below, it is apparent that the primary slip system (*i.e.* the system with the greatest Schmid factor) in h.c.p. crystals will be [010](001) for all tensile axes of the [0VW] type. ![Diagram illustrating slip geometry](images/diagram007.gif) Geometry of slip in a hexagonal close-packed crystal system. The angle between the [100] and [010] directions is 120°. The three possible slip systems are [100](001), [010](001) and [110](001). For a tensile axis of the [0VW] type, lying in the (100) plane, the angle between the slip direction and the tensile axis is smallest for the [010] slip direction. cos φ cos λ is largest when λ = λ[010], hence the operating slip system is [010](001).

Slip in HCP metals 3: calculation of forces =
Having determined which slip system will operate first, it should now be possible to calculate the minimum force needed to cause plastic flow during the application of a stress to the crystal. The calculation proceeds as follows. The following diagram shows the orientation of the [021] tensile axis with respect to the unit cell vectors **b** and **c** (parallel to [010] and [001] respectively). These three vectors all lie in the (100) plane. ![Diagram showing orientation of the [021] tensile axis with respect to the unit cell vectors b and c](images/graph1.gif) Identification of the angles λ and φ. The diagram shows coplanar vectors on the (100) plane. The (001) slip plane lies horizontal and extends perpendicular to the screen.
The initial angle between the tensile axis and the slip plane normal, φ0, is

\[\varphi_0 = \tan^{-1}\left(\frac{2a}{c}\right) = \tan^{-1}\left(\frac{2 \times 2.98}{5.62}\right) = 46.7^\circ\]

and the angle between the tensile axis and the slip direction, λ0, is λ0 = 90° − φ0 = 43.3°. The Schmid factor for the [010](001) slip system is therefore cos φ0 cos λ0 = cos 46.7° × cos 43.3° = 0.499. If the critical resolved shear stress for cadmium is 0.15 MPa, and the initial crystal diameter is 3 mm, then the force required to cause slip can be calculated:

\[\tau_c = \sigma_y\cos \varphi_0\cos \lambda_0 = \frac{F}{A}\cos \varphi_0\cos \lambda_0\]

\[F = \frac{\tau_c A}{\cos \varphi_0\cos \lambda_0} = \frac{0.15 \times 10^6 \times \pi \times (1.5 \times 10^{-3})^2}{0.499} = \underline{\underline{2.12{\rm\ N}}}\]

Now consider what happens when the crystal is plastically extended. If the original length of the crystal was l0 = 50 mm, and the crystal is extended by 50 % to l1 = 75 mm, then:
* The *diameter* of the crystal will decrease as volume is conserved: l0 d0² = l1 d1²,

\[{\rm thus}\;d_1 = \sqrt{\frac{l_0 d_0^2}{l_1}} = \sqrt{\frac{50 \times 3^2}{75}} = \underline{\underline{2.45{\rm\ mm}}}\]

* The *geometry* of the slipped crystal changes according to: l0 cos φ0 = l1 cos φ1 and l0 sin λ0 = l1 sin λ1, hence: φ1 = 62.8° and λ1 = 27.2°, and so the new Schmid factor for the extended crystal is: cos φ1 cos λ1 = 0.407.

The force required to cause further deformation of the crystal in this condition can be calculated as before:

\[F = \frac{\tau_c A}{\cos \varphi_1\cos \lambda_1} = \frac{0.15 \times 10^6 \times \pi \times (1.225 \times 10^{-3})^2}{0.407} = \underline{\underline{1.74{\rm\ N}}}\]

The force required to cause slip is *lower* after the crystal has been deformed. This phenomenon is known as *geometric softening* - once deformation has started, less load is required to deform the crystal further. Geometric softening depends heavily on the orientation of the tensile axis within the crystal. For some orientations, no geometric softening is observed. There are other factors that control slip, which will not be discussed here, and these can dominate over the geometric factor.

Slip in HCP metals 4: observing slip in cadmium =
Cadmium is a hexagonal close-packed metal (with a non-ideal axial ratio *c/a*). We have seen that it slips on <100>{001}. Cadmium single crystals can be grown in the form of long cylinders. The orientation of each crystal is random - *i.e.* no particular orientation is favoured. The specimen can be deformed under *extension control*, where the crystal is extended by a given amount and the load required to achieve that extension is recorded. Typical load-extension curves obtained from such an experiment are shown below. *Geometric softening* may or may not be observed, depending on the geometry of the crystal. ![Graph of load against extension](images/graph2.gif) Schematic load-extension curves for tensile deformation of two cylindrical cadmium crystals with the same initial diameter but different crystallographic orientations. After an initial period of elastic deformation, both begin to deform plastically. The solid line shows the behaviour of a crystal exhibiting geometric softening, where the load decreases over part of the extension. The broken line shows a sample in which no geometric softening occurs.
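The worked example above is easy to check numerically; this sketch simply retraces the same arithmetic and prints the drop in load that constitutes geometric softening.

```python
# Sketch: re-tracing the geometric softening calculation above for the cadmium crystal.
import math

tau_c = 0.15e6            # critical resolved shear stress (Pa)
a, c  = 2.98, 5.62        # lattice parameters (angstroms)
l0, l1 = 50.0, 75.0       # lengths before and after 50 % extension (mm)
d0 = 3.0                  # initial diameter (mm)

phi0 = math.atan(2 * a / c)            # angle between tensile axis and slip plane normal
lam0 = math.pi / 2 - phi0              # coplanar special case: phi + lambda = 90 degrees

def force(phi, lam, d_mm):
    A = math.pi * (d_mm * 1e-3 / 2)**2            # cross-sectional area (m^2)
    return tau_c * A / (math.cos(phi) * math.cos(lam))

# Geometry after extension: l cos(phi) and l sin(lambda) are conserved
phi1 = math.acos(l0 * math.cos(phi0) / l1)
lam1 = math.asin(l0 * math.sin(lam0) / l1)
d1 = math.sqrt(l0 * d0**2 / l1)                   # volume conservation

print(f"F before extension: {force(phi0, lam0, d0):.2f} N")   # ~2.12 N
print(f"F after  extension: {force(phi1, lam1, d1):.2f} N")   # ~1.74 N (geometric softening)
```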
The two crystals yield at different loads because the Schmid factors for the primary slip systems are different, but τc is constant for the material. Both crystals have the same Young's modulus. As the crystal deforms, the geometry of the crystal changes according to the relationships given earlier. The intersection of the slip planes with the surface of the crystal gives rise to steps and bands on the surface, which can be seen in the optical and scanning electron microscopes.

Scanning electron micrograph of an undeformed cadmium single crystal. Cadmium crystal after deformation to 100 % strain. High magnification image of slip steps in a cadmium crystal after deformation to 200 % strain; the tensile axis runs approximately from the bottom left corner to the top right corner of the image. Video clips of a crystal undergoing deformation in the SEM are available on the following page.

Video clips of slip in a single cadmium crystal =
The following video clips were made as a single crystal of cadmium underwent deformation in a scanning electron microscope (SEM) chamber.
* Two clips showing the appearance of slip steps as slip begins on a previously undeformed crystal.
* Slip proceeding on a previously deformed crystal. The trace of the slip planes can be seen. Markers have been dropped onto the surface of the crystal to allow the relative movement of parts of the crystal to be clearly seen.
* Slip proceeding on a previously deformed crystal. An unusually large slip step can be seen on the left-hand edge of the specimen. As slip proceeds, the slip planes rotate towards the tensile axis, but the large slip step does not increase in size.
* The crystal fractures under tensile load. The mode of fracture is ductile tearing. Note how the angle of the fracture surfaces mimics the angle of the slip planes in the deformed crystal.

Exercise: Determination of the critical resolved shear stress for slip in cadmium =
The following sequence of images shows slip in a cadmium crystal after elongation by 40 % and 100 % of its original length. By measuring the angles φ and λ at each stage of deformation, the slip geometry relationships *l*0 cos φ0 = *l*1 cos φ1 and *l*0 sin λ0 = *l*1 sin λ1 can be tested. ...

Slip in CCP metals
Cubic close-packed (c.c.p.) crystals have slip systems consisting of the close-packed directions in the close-packed planes. The shortest lattice vectors are along the face diagonals of the unit cell, as shown below: ![Diagram of slip directions in a c.c.p. unit cell](images/diagram009.gif) Slip systems in c.c.p. on the (111) plane. There are 3 distinct slip directions lying in this plane, and there are 3 other planes of the {111} type, making 12 distinct slip systems of the <1 -1 0>{111} type.
The cubic symmetry requires that there be many distinct slip systems, using all <1 -1 0> directions and {111} planes. There are 12 such <1 -1 0>{111} systems, five of which are independent. Note that on a given slip system, slip may occur in either direction along the specified slip vector. The c.c.p. crystal therefore has many more slip systems than an h.c.p. crystal, and slip progresses through three stages: ![Graph of resolved shear stress against shear strain during deformation of a c.c.p. single crystal](images/graph3.gif) Plot of the resolved shear stress *τ*R acting on one slip system against the shear strain γ during deformation of a c.c.p. single crystal. The *initial elastic strain* is caused by the simple stretching of bonds. Hooke's Law applies to this region. At the yield point, *stage I* begins. The crystal will extend considerably at almost constant stress. This is called *easy glide*, and is caused by slip on one slip system (the *primary* slip system). The geometry of the crystal changes as slip proceeds. The Schmid factor changes for each slip system, and slip may begin on a second slip system when its Schmid factor is equal to that of the primary slip system. In this stage of deformation, known as *stage II*, dislocations are gliding on two slip systems, and they can interact in ways that *inhibit* further glide. Consequently, the crystal becomes more difficult to extend. This phenomenon is called *work hardening*. The slope of the stress-strain curve, i.e. the work hardening rate, is approximately constant in stage II. *Stage III* corresponds to extension at high stresses, where the applied force becomes sufficient to overcome the obstacles, so the slope of the graph becomes progressively less steep. The work hardening *saturates*. Stage III ends with the failure of the crystal.

Summary =
In this teaching and learning package, we have seen how the phenomenon of plastic deformation proceeds by slip. This involves dislocation motion in specific directions on specific planes; the combinations of plane and direction are known as *slip systems*. The observed yield stress of a single crystal is related to the geometry of the crystal structure *via* Schmid's Law:

τC = σy cos φ cos λ

where τC is the *critical resolved shear stress*, which is related to the stress required to move dislocations across the slip plane. The macroscopic behaviour of cadmium crystals was examined as an example of slip in a hexagonal close-packed metal. It was demonstrated that the orientation of the crystal with respect to the tensile axis is crucial in determining the behaviour of a single crystal undergoing deformation. Microscopic slip steps were observed on the crystals, which confirm the geometry of slip and show that certain geometrical relationships are obeyed as slip proceeds. The crystal structure of the material can affect the nature of slip. We have seen how cubic close-packed metals undergo work hardening due to the simultaneous operation of several slip systems - this mechanism cannot occur in hexagonal close-packed crystals unless unusual slip systems operate. In polycrystalline materials, the distribution of grain orientations and the constraint to deformation offered by neighbouring grains give rise to a smoother overall stress-strain curve, without the distinct stages seen in a single crystal sample. Crystal structure is also important in polycrystalline samples - the von Mises criterion states that a minimum of five independent slip systems must exist for general yielding.
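To make the bookkeeping over the 12 slip systems concrete, the sketch below enumerates them and picks out the system with the greatest Schmid factor for an arbitrarily chosen tensile axis (here [1 2 3], purely as an example).

```python
# Sketch: enumerate the 12 <1 -1 0>{111} slip systems of a c.c.p. crystal and
# pick out the one with the greatest Schmid factor for a chosen tensile axis.
import itertools
import numpy as np

axis = np.array([1.0, 2.0, 3.0])            # tensile axis; an arbitrary illustrative choice
t = axis / np.linalg.norm(axis)

# The four {111} plane normals and the six <110> face diagonals (up to sign):
planes = [np.array(p, float) for p in [(1,1,1), (-1,1,1), (1,-1,1), (1,1,-1)]]
dirs = [np.array(d, float) for d in
        [(1,1,0), (1,-1,0), (1,0,1), (1,0,-1), (0,1,1), (0,1,-1)]]

best = (0.0, None, None)
for n, d in itertools.product(planes, dirs):
    if abs(np.dot(n, d)) > 1e-9:            # the slip direction must lie in the slip plane
        continue                            # (three of the six directions per plane do)
    m = abs(t @ (n / np.linalg.norm(n))) * abs(t @ (d / np.linalg.norm(d)))
    if m > best[0]:
        best = (m, n, d)

m, n, d = best
# For [1 2 3] this prints direction [1 0 1] on plane [-1 1 1], Schmid factor ~0.47
print(f"Primary slip system: direction {d.astype(int)} on plane {n.astype(int)}, "
      f"Schmid factor = {m:.3f}")
```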
Questions =
### Quick questions
*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. A cubic close-packed (cubic F) metal is deformed under tension. The tensile axis lies along [2 -3 1]. What is the primary slip system?

   a. [0 -1 1] (1 -1 -1)
   b. [0 1 1] (1 -1 1)
   c. [1 -1 0] (1 1 -1)
   d. [1 1 0] (1 -1 1)

2. Cadmium is hexagonal close-packed and slips on <100>{001} slip systems. As dislocations glide in a cadmium single crystal during plastic flow, steps form at the edges of the crystal where the dislocations reach the surface. What height will the slip step arising from the arrival of one single dislocation at the surface of the crystal be, in terms of the lattice parameters *a* and *c*?

   a. *a* × √(3/4)
   b. *a*
   c. *c*
   d. *a* / 2

3. An amorphous solid is deformed under tension. Which of the following statements describes its behaviour best?

   a. Dislocations glide through the material, resulting in bulk plastic deformation.
   b. There is virtually no plastic deformation by slip, since dislocation movement through the structure is very difficult.
   c. There is no plastic deformation by slip, since dislocations cannot exist in an amorphous material.
   d. Dislocations can move through the solid, but there are no defined slip planes so plastic deformation tends to occur by other mechanisms.

4. In hexagonal and cubic close-packed crystal structures, slip occurs along close-packed directions on the close-packed planes. Body-centred cubic metals are also ductile through the mechanism of slip, but they have no close-packed planes. What slip systems *do* b.c.c. crystals slip on?

   a. <1 -1 1>{110}
   b. <1 -1 1>{111}
   c. <1 -1 0>{110}
   d. <001>{110}

### Deeper questions
*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

5. Find the Schmid factor for the primary slip system in a cubic close-packed single crystal when the tensile axis is parallel to [3 4 -1].

6. This question refers to the exercise involving deformation of a cadmium single crystal. The experimental data from the experiment are given in the table below. The angles φ and λ at the two stages of elongation were measured from the diagrams in the exercise. Determine what the values of φ and λ must have been in the crystal before deformation. (Assume that only one slip system operated.) The values of *l* cos φ and *l* sin λ have been calculated for you.

   | Percent strain | Sample length / mm | φ | *l* cos φ / mm | λ | *l* sin λ / mm |
   | - | - | - | - | - | - |
   | 0 % | *l*0 = 18 | φ0 | *l*0 cos φ0 | λ0 | *l*0 sin λ0 |
   | 40 % | *l*1 = 25 | φ1 = 60° | *l*1 cos φ1 = 12.5 | λ1 = 30° | *l*1 sin λ1 = 12.5 |
   | 100 % | *l*2 = 36 | φ2 = 75° | *l*2 cos φ2 = 9.3 | λ2 = 15° | *l*2 sin λ2 = 9.3 |

7. A polycrystalline sample of a cubic close-packed metal is deformed under tension along a tensile axis parallel to [134]. If the critical resolved shear stress τc = 95 kPa, estimate the yield stress σy of the sample, assuming that there are no obstacles to dislocation motion and that the grains have random orientation relative to one another.

8. Sometimes plastic deformation occurs without slip. Suggest mechanisms by which plastic deformation could occur without slip in the following circumstances:
   1. At elevated temperature with a very low strain rate.
   2. In an h.c.p. polycrystalline sample with only 3 independent slip systems.
   3. In a semi-crystalline polymer sample.

Going further =
### Books
Most general 'engineering materials' books cover relevant topics. For example, try:
* Newey C and Weaver G, *Materials: Principles and Practice*, Open University and Butterworths, 1990.
* Weidmann G, Lewis P and Reid N, *Structural Materials*, Open University and Butterworths, 1990.

Also consider looking at:
* Kelly A and Knowles K M, *Crystallography and Crystal Defects* (second edition), John Wiley, 2012: a comprehensive (and mathematically detailed) exploration of the relationship between the crystal structure and properties of solids.

### CD-ROM and websites
The MATTER Project's 'Materials Science on CD-ROM' includes relevant modules on:
* Introduction to Crystallography
* Dislocations

See the MATTER Project for details of availability.
Aims
On completion of this TLP you should:
* understand the concept of a solid solution
* understand the thermodynamic principles behind solid solutions
* be aware of different types of solid solution and factors affecting the extent of solid solution
* understand how the interaction between the constituent components determines phase separation or ordering
* be aware of the presence of solid solutions on phase diagrams

Before you start
There are no specific prerequisites for this TLP, but it would be useful to have a basic knowledge of crystal structures, phase diagrams and thermodynamics. Take a look at the TLPs on these topics.

Introduction
The extent to which the components of an alloy are miscible depends on the interaction between the atoms:
* If the species do not tend to bond to each other, then separate phases will form, with limited miscibility
* If strong mutual attraction occurs, a single crystal of a different structure can form, such as in intermetallic compounds
* If there is little difference between like and unlike bonds, then a *solid solution* can occur over a wide range of chemical compositions

In these solid solutions, different types of atoms or molecules exist in the same crystal lattice. A good example of a solid solution is the Cu-Ni system, for which the phase diagram is shown below. Phase diagram for the Cu-Ni system.

Solid solutions =
A solid solution is a single phase which exists over a range of chemical compositions. Some minerals are able to tolerate a wide and varied chemistry, whereas others permit only limited chemical deviation from their ideal chemical formulae. In many cases, the extent of solid solution is a strong function of temperature, with solid solution being favoured at high temperatures and unmixing and/or ordering favoured at low temperatures. Types of solid solution:
1. *Substitutional solid solution*: chemical variation is achieved simply by substituting one type of atom in the structure by another.
2. *Coupled substitution*: this is similar to the substitutional solid solution, but in a compound cations of different valence are interchanged. To maintain charge balance, two coupled cation substitutions must take place.
3. *Omission solid solution*: chemical variation is achieved by omitting cations from cation sites that are normally occupied.
4. *Interstitial solid solution*: chemical variation is achieved by adding atoms or ions to sites in the structure that are not normally occupied.

Factors affecting the extent of solid solution:
1. *Atomic/ionic size*: if the atoms or ions in a solid solution have similar ionic radii, then the solid solution is often very extensive or complete. Generally, if the size difference is less than about 15%, then extensive solid solution is possible. For example, Mg2+ and Fe2+ have a size mismatch of only about 7%, and complete solid solution between these two elements is observed in a wide range of minerals. However, there is a 32% size difference between Ca2+ and Mg2+, and we expect very little substitution of Mg for Ca to occur in minerals.
2. *Temperature*: high temperatures favour the formation of solid solutions, so that endmembers which are immiscible at low temperature may form complete or more extensive solid solutions with each other at high temperature. High temperatures promote greater atomic vibration and open structures, which are easier to distort locally to accommodate differently-sized cations.
Most importantly, solid solutions have a higher entropy than the endmembers, due to the increased disorder associated with the randomly distributed cations, and at high temperatures the -TS term in the Gibbs free energy stabilises the solid solution.
3. *Structural flexibility*: although cation size is a useful indicator of the extent of solid solution between two endmembers, much depends on the ability of the rest of the structure to bend bonds (rather than stretch or compress them) to accommodate local strains.
4. *Cation charge*: heterovalent substitutions (i.e. those involving cations with different charges) rarely lead to complete solid solutions at low temperatures, since they undergo complex cation ordering phase transitions and/or phase separation at intermediate compositions. These processes are driven by the need to maintain local charge balance in the solid solution as well as to accommodate local strain.

Olivine =
Olivine is the name for a series of minerals with the formula M2SiO4, where M is most commonly Fe or Mg. Fayalite (Fe2SiO4) and forsterite (Mg2SiO4) form a substitutional solid solution in which the iron and magnesium atoms can be substituted for each other without significantly changing the crystal structure. As mentioned previously, there is a size mismatch of only about 7% between Mg2+ and Fe2+, so complete solid solution between these two elements is observed in olivine. Olivine has an orthorhombic structure. A typical set of lattice parameters for an unspecified composition is: *a* = 0.49 nm, *b* = 1.04 nm, *c* = 0.61 nm. The structure consists of isolated SiO44- tetrahedra, which are held together by M cations occupying two types of octahedral site (M1 and M2). The isolated tetrahedra point alternately up and down along rows parallel to the *z*-axis. Alternatively, the structure can be described as an approximately hexagonal close-packed array of oxygen anions, with M cations occupying half of the octahedral sites, and Si cations occupying one eighth of the tetrahedral sites. If the hexagonal close packing were ideal, the M1 and M2 sites would be regular octahedra, identical in size, but since the packing is not ideal, the M2 sites are slightly larger and more distorted than the M1 sites. A plan view of the structure projected along the *x*-axis is shown below. Plan view of the structure of olivine. Rotating VR model of the olivine structure: oxygen atoms are shown in red, silicon atoms in blue, M cations occupying M1 sites in yellow, and M cations occupying M2 sites in purple.

Thermodynamics of solid solutions =
For an introduction to the basics of thermodynamics, look at the TLP on that topic.
### Entropy
The entropies of the two endmembers, A and B, of a solid solution are SA and SB, and are mainly vibrational in origin (i.e. related to the structural disorder caused by thermal vibrations of the atoms at finite temperature). The entropy of the solid solution will always be greater than the entropy of the mechanical mixture. ![Graph of entropy vs mole fraction of B](images/image003.gif) The entropy of the mechanical mixture is given by: S = xASA + xBSB ![Graph of entropy of mixing vs mole fraction of B](images/image004.gif) The excess entropy is called the *entropy of mixing* (ΔSmix), and is mainly configurational in origin (i.e. it is associated with the large number of energetically-equivalent ways of arranging atoms/ions on the available lattice sites).
The configurational entropy is defined as: S = k ln W, where k is Boltzmann's constant (1.38 × 10⁻²³ J K⁻¹) and W is the number of possible configurations. If it is assumed that the entropy of mixing is equal to the configurational entropy, ΔSmix = k ln W. If we consider mixing NA A atoms and NB B atoms at random on N lattice sites, then the number of different configurations of A and B cations is given by:

\[W = \frac{N!}{N_A!\,N_B!} = \frac{N!}{(x_A N)!\,(x_B N)!}\]

where xA and xB are the mole fractions of A and B respectively. Hence

\[\Delta S_{\rm mix} = k\ln \frac{N!}{(x_A N)!\,(x_B N)!} = k[\ln N! - \ln (x_A N)! - \ln (x_B N)!]\]

Stirling's approximation states that, for large N: ln N! ≈ N ln N − N. Hence

ΔSmix = k(N ln N - N) - k(xAN ln xAN - xAN) - k(xBN ln xBN - xBN)
ΔSmix = -Nk(-ln N + 1 + xA ln xA + xA ln N - xA + xB ln xB + xB ln N - xB)
ΔSmix = -Nk(xA ln xA + xB ln xB + (xA + xB) ln N - (xA + xB) - ln N + 1)

Since xA + xB = 1, ΔSmix = -Nk(xA ln xA + xB ln xB). If N is taken to be equal to Avogadro's number, then per mole of sites: ΔSmix = -R(xA ln xA + xB ln xB).
### Enthalpy
The enthalpies of the two endmembers of the solid solution, A and B, are equal to HA and HB respectively. For a mechanical mixture of these two endmembers, the enthalpy is given by: H = xAHA + xBHB. The excess enthalpy relative to the mechanical mixture is known as the *enthalpy of mixing*, ΔHmix. This can be positive, negative, or zero. ![Graph of enthalpy of mixing vs mole fraction of B](images/image005.gif) If ΔHmix = 0, the solution is said to be ideal, and for ΔHmix ≠ 0, the solid solution is said to be non-ideal. A simple expression for the enthalpy of mixing can be derived by assuming that the energy of the solid solution arises only from the interaction between nearest-neighbour pairs. ![Graph of enthalpy vs mole fraction of B](images/image006.gif) Let z be the coordination number of the lattice sites on which mixing occurs. If the total number of sites is N, then the total number of nearest-neighbour bonds is 0.5Nz. (The factor of 0.5 arises because each bond is shared between two atoms/ions.) Let the energy associated with A-A, B-B and A-B nearest-neighbour pairs be WAA, WBB and WAB respectively. If the cations are mixed randomly, then the probabilities of A-A, B-B and A-B neighbours are xA², xB² and 2xAxB respectively. Hence the total enthalpy of the solid solution is given by:

H = 0.5Nz(xA²WAA + xB²WBB + 2xAxBWAB)

This can be rearranged to:

H = 0.5Nz(xAWAA + xBWBB) + 0.5NzxAxB(2WAB − WAA − WBB)

The first term in this expression is equal to the enthalpy of the mechanical mixture. Hence:

ΔHmix = 0.5NzxAxB(2WAB − WAA − WBB) = 0.5NzxAxBW

where W (= 2WAB − WAA − WBB) is known as the *regular solution interaction parameter*, and its sign determines the sign of ΔHmix. A positive value of W indicates that it is energetically more favourable to have A-A and B-B neighbours, rather than A-B neighbours. In order to maximise the number of A-A and B-B neighbours, the solid solution unmixes into A-rich and B-rich regions. This process is called *exsolution*. A negative value of W indicates that it is energetically more favourable to have A-B neighbours, rather than A-A or B-B neighbours. To maximise the number of A-B neighbours, the solid solution forms an ordered compound.
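The expressions just derived are easy to evaluate. The sketch below combines the ideal entropy of mixing with the regular-solution enthalpy of mixing into ΔGmix, for an assumed positive interaction energy, and checks whether the resulting curve has one minimum or two, anticipating the free-energy discussion that follows.

```python
# Sketch: the regular solution model per mole of sites, using the expressions above.
# The molar interaction energy is an illustrative assumption.
import numpy as np

R = 8.314                     # gas constant (J K^-1 mol^-1)
W_molar = 15e3                # 0.5*N*z*W per mole of sites (J mol^-1); assumed, > 0 (exsolution)

def dS_mix(xB):
    xA = 1.0 - xB
    return -R * (xA * np.log(xA) + xB * np.log(xB))     # ideal entropy of mixing

def dH_mix(xB):
    return W_molar * (1.0 - xB) * xB                    # regular-solution enthalpy of mixing

def dG_mix(xB, T):
    return dH_mix(xB) - T * dS_mix(xB)

x = np.linspace(0.01, 0.99, 99)
for T in (300.0, 1200.0):
    G = dG_mix(x, T)
    has_central_max = np.any(np.diff(np.sign(np.diff(G))) < 0)   # detect a local maximum
    print(f"T = {T:6.0f} K : {'two minima (unmixing)' if has_central_max else 'single minimum'}")
```

At 300 K the enthalpy term wins and the curve develops two minima (the unmixing case); at 1200 K the -TΔSmix term dominates and a single minimum remains, exactly the crossover described in the next section.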
Free energy =
The free energy of mixing is defined as: ΔGmix = ΔHmix - TΔSmix. The variation in free energy as a function of composition and temperature can be considered for three different situations: an ideal solution, a non-ideal solution with a positive enthalpy of mixing, and a non-ideal solution with a negative enthalpy of mixing.

**1. Ideal solid solution, ΔHmix = 0** In this case, ΔGmix = -TΔSmix, and since ΔSmix is always positive, ΔGmix is always negative. At any composition, the free energy of the single-phase solid will be lower than the combined free energy of any mixture of the two separate phases, as shown in the diagram below. The solid solution is stable as a single phase, with a disordered cation distribution, at all compositions and temperatures. ![Graph of free energy of mixing vs mole fraction of B](images/image008.gif)

**2. Non-ideal solid solution, ΔHmix > 0** At high temperatures, the -TΔSmix term dominates, and the free energy curve resembles that of the ideal solution. As the temperature decreases, the ΔHmix term and the -TΔSmix term become similar in magnitude and the resulting free energy curve shows two minima and a central maximum. ![Graph of free energy of mixing vs mole fraction of B](images/image009.gif) The common tangent rule can be used to determine the equilibrium state of the solid solution. The common tangent touches the free energy curve at C and D, and for bulk compositions between these points, the free energy of the single-phase solution is higher than that of a mixture of C and D. Hence at equilibrium, the system will minimise its free energy by exsolving into two phases with compositions C and D. For compositions outside C and D, the solid solution will still be stable as a single phase, since it has a lower free energy than a mixture of two phases.

**3. Non-ideal solid solution, ΔHmix < 0** In this case, there is a strong driving force for ordering when the A:B ratio is about 1:1. The fully-ordered phase has zero configurational entropy, because there are only two ways to arrange the atoms: the ordered and anti-ordered states (which are equivalent). However, this state has a low enthalpy, due to the energetically-favourable arrangement of ions, which stabilises the ordered phase at low temperatures. In contrast, the fully disordered solid solution has a high configurational entropy, which stabilises it at high temperatures. At a certain critical temperature, there will be a phase transition from the ordered to the disordered phase.

Exsolution in phase diagrams
A binary solid solution can show a phase diagram as follows: ![Phase diagram](images/image010.gif) The solidus and liquidus bound the two-phase (solid + liquid) region which separates the single-phase solid and liquid regions. This is similar to the Cu-Ni phase diagram. If the system demonstrates *exsolution*, then there will be a region in the solid state where two solid phases form from the solid solution. This is common, and most solid solutions demonstrate it to some degree. ![Phase diagram](images/image011.gif) In a eutectic phase diagram, the two-phase solid region meets the solidus/liquidus. ![Phase diagram](images/image012.gif) Solid solution regions are usually denoted with Greek letters, for example α for ferritic iron and γ for austenitic iron.

Demonstration of phase separation =
![Schematic diagram of phase separation](images/image015.gif) The transition from a single phase to two phases (and vice versa) can be easily demonstrated using a mixture of two suitable liquids.
A mixture of cyclohexane and aniline can exist as two separate phases or as a single phase, depending on the temperature. The thermodynamic transition between these two states can be understood by considering the balance between entropy and enthalpy (see the previous section). Aniline and cyclohexane are immiscible over a wide range of compositions below about 35 °C. In this immiscible region there exists an aniline-rich and a cyclohexane-rich phase, separated by a boundary, seen as a meniscus. When this mixture is heated, the volume of one phase increases at the expense of the other. This can be seen as movement of the meniscus, provided the heating is slow enough. At the transition temperature for the particular composition, there will no longer be two discernible phases. A significant point occurs when the distinction between the two coexisting phases reduces to zero. Here the domains present in the mixture can switch easily between aniline-rich and cyclohexane-rich. The compositional fluctuations are on such a length scale as to interfere with light passing through the mixture. Light is scattered in proportion to the square of the difference in refractive index, *n*, between the two phases, and it scatters more as the number of domain interfaces increases. Hence light is scattered strongly by the mixture across a small temperature range around the transition temperature. ![Phase diagram](images/image016.gif) At the *critical point*, the scattering is so intense that the system becomes opaque. This phenomenon is called *critical opalescence*. The domains demonstrate some interesting properties, such as fractal shapes, and there is a peak in the heat capacity. The critical transition temperature is a maximum with respect to the composition, so it can be determined by interpolating transition temperatures from known compositions.
### Demonstration
A mixture of equal quantities of cyclohexane and aniline contained in a sealed vial is heated to approximately 35 °C (i.e. just above the critical temperature), using a water bath (or water from a hot tap). The mixture is allowed to cool, and a laser is pointed at the vial so that it shines onto a screen opposite, as shown in the diagram below. ![Diagram of apparatus](images/image017.gif) When the critical temperature is reached and the mixture goes from a single phase to two phases, the spot of light on the screen is disrupted as the phases separate. The spot 'flickers' and then becomes totally diffuse. It will eventually form a single spot again once the transition is completed and the two chemicals have completely separated. The pattern of events can also be seen in reverse as the mixture is heated.

Video showing the laser light (as seen on the screen) flickering and spreading out to become completely opaque as the mixture cools through the transition temperature. This video has been speeded up by a factor of about 10.

Video showing the vial, filmed perpendicular to the direction of the laser light (left to right). Initially, the laser light is seen as a single beam passing through the mixture, but as the transition point is reached, the beam spreads out. The single beam eventually reforms once the transition is complete. This video has been speeded up by a factor of about 200.
Precipitates from solid solution The precipitation of a solid phase from a liquid matrix is governed by a balance between the thermodynamic driving force and the energy penalty for creating new solid-liquid interfaces. This determines the size and shape of the precipitates. The precipitation of a solid phase from a solid parent phase is very similar. There are various types of interface between solid phases: 1. *Coherent* - there is perfect registry of the lattices. 2. *Coherent with strain* - it is quite likely for there to be some strain with the interface, due to imperfect matching. The strain energy increases with the size of the growing particle, and there is a transition to a semi-coherent interface. 3. *Semi-coherent interface* - the introduction of dislocations reduces the strain energy (but they themselves contribute to the energy of the system). 4. *Incoherent* - there is no matching of the interface.

| | |
| - | - |
| Diagram of coherent interface (Coherent) | Diagram of coherent with strain interface (Coherent with strain) |
| Diagram of semi-coherent interface (Semi-coherent) | Diagram of incoherent interface (Incoherent) |

In general, the interfacial free energy will be minimised with better matching of the two phases. Incoherent interfaces have high energy and are relatively mobile because of the greater freedom of atomic motion. The stresses present in the parent matrix as the precipitate grows strongly influence the shape of the precipitate. By modelling the precipitate as an ellipsoid of revolution, the following graph shows how the strain energy is related to the shape. ![Graph of strain energy against axial ratio](images/image022.gif) Growth as discs or plates is clearly preferred. A precipitate particle will likely have some coherent and some incoherent interfaces with the matrix. The greater mobility of the incoherent interfaces leads to faster growth in these directions. This anisotropic growth leads to plate and disc morphologies. The bounding coherent interfaces will be parallel to crystallographic planes in the matrix. ![Diagram](images/image023.gif) Solid solution precipitation/exsolution is used to strengthen many alloys. This is known as precipitation hardening, or *age hardening*. It involves quenching an alloy to a supersaturated state (where the amount of dissolved solute is greater than the equilibrium amount predicted by the phase diagram). A heating schedule can then be applied to control the nature of the precipitation. For example, in Al-Cu, a very fine dispersion of θ particles hardens the α phase. This age-hardened alloy is used in aerospace applications. At lower temperatures, it is preferable to have incoherent precipitates, as the greater strains produce more resistance to dislocation motion. At higher temperatures, however, the greater mobility of the incoherent interfaces allows larger particles to grow at the expense of smaller ones (called *coarsening*), and the system becomes less effective at strengthening. So for high temperature use, coherent particles are used, such as the γ' precipitate in nickel-based superalloys (here a phenomenon called order hardening provides the strengthening mechanism, rather than the strain fields). Ceramics, which are typically brittle, can also benefit from solid solution precipitation. Zirconia-based compounds can be toughened by having tetragonal structure particles in the monoclinic matrix. Propagation of cracks through the zirconia requires the transformation of these precipitates to the monoclinic form.
This requires an input of energy, provided by the stress, hence the material is toughened. Another use of solid solution precipitation lies in nano-materials. By precipitating from a solid solution, the nanometre (10⁻⁹ m) scale of the microstructure can provide many beneficial effects. For example, the presence of 5 nm diameter carbide particles in steel piano wires helps make it the world's strongest structural material (with some ductility). Precipitation from a liquid phase is too fast to produce such small scale particles. Monte Carlo simulation Computer modelling is a useful tool in representing and predicting atomic processes. A simulation has been created using IGOR Pro, a powerful graphing, data analysis, and programming tool for scientists and engineers produced by WaveMetrics. IGOR Pro is available for both Windows and Macintosh, and a demo version can be downloaded from the WaveMetrics website. If you have access to a computer with IGOR Pro you can download the simulation, open it in IGOR Pro and experiment with it. If you do not have access to a computer with IGOR Pro, but have a high-speed connection to the Internet, you can run simulation movies generated by IGOR Pro for a limited number of parameter settings on the next page (but read this one first). This simulation uses a 100x100 grid to represent a square 2-dimensional atomic array. This solid solution consists of two different types of atom: A and B. The A atoms can lie on two different types of site (alpha - yellow and beta - blue). The B atoms are black. The program demonstrates the expected action of the atomic system under the chosen input parameters. The simulation uses a statistical approach (a Monte Carlo method, using random numbers to generate possible future outcome scenarios) to predict the atomic "jumps" within the solid solution, according to the interaction parameter and the temperature. By considering the change in free energy with mixing, the temperature and the interaction parameter determine whether one phase or two phases are most stable. The influence of the temperature on the equilibrium state can be seen in the solvus in the phase diagram. ![Phase diagram](images/image024.gif) If the interaction parameter is positive, then A-A and B-B interactions will be energetically more favourable than A-B interactions. The positive enthalpy of mixing generates a tendency for the solution to form A-rich and B-rich regions (*exsolution*). This phase separation produces A-rich and B-rich phases. At lower temperatures, more significant phase separation occurs. This gives larger regions of the two different phases, each of which is closer in composition to the end-members, A and B, due to the position of the solvus. Above the solvus temperature (which is around 375 K for a composition of 50% A - 50% B), the solution remains fairly random, although some short range order can be seen, especially closer to the solvus, with a preference for same bond types. At a lower composition of A (40% A - 60% B), the B regions are relatively larger. ![Phase diagram](images/image025.gif) If the interaction parameter is negative, then A-B interactions will be energetically more favourable than A-A and B-B interactions. The negative enthalpy of mixing generates a tendency for the solution to form ordered compounds. The ordered state of this system has a chessboard-like appearance, maximising the number of A-B bonds. By distinguishing the different types of A site (alpha - yellow and beta - blue), it is possible to see regions of different *phase*, i.e. ordered regions and anti-ordered regions. These *anti-phase domains* are separated by *anti-phase boundaries*. At lower temperatures these domains are larger (fewer anti-phase boundaries). Just above the solvus, some short range ordering is seen, with a preference for A-B bonds. At high temperatures, the solution remains random. With a higher percentage of B in the mixture, segregation of B along the anti-phase boundaries can be seen. More segregation is seen at lower temperatures. If the solution is ideal, with all interactions energetically equivalent, then the interaction parameter and the enthalpy of mixing will be zero. A random solid solution is formed, with no preference for any of the bond types.
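The IGOR Pro simulation itself is not reproduced here, but a minimal Monte Carlo sketch in the same spirit is given below. It uses Kawasaki dynamics (swaps of neighbouring atoms, which conserve the overall composition) with a Metropolis acceptance rule; the interaction parameter ω is taken as the energy cost per A-B bond, so ω > 0 gives exsolution and ω < 0 gives ordering. The numerical values, and the omission of the alpha/beta site colouring, are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100          # lattice is N x N, as in the TLP simulation
x_A = 0.5        # fraction of A atoms
omega = 0.02     # energy per A-B bond, eV (assumed; > 0 -> exsolution)
kT = 0.02        # thermal energy k_B*T, eV (assumed)
steps = 200_000  # number of attempted swaps

# 1 = A atom, 0 = B atom, randomly mixed initial state
lattice = (rng.random((N, N)) < x_A).astype(int)

def unlike_bonds(lat, i, j):
    """Number of unlike (A-B) nearest-neighbour bonds around site (i, j),
    with periodic boundary conditions."""
    s = lat[i, j]
    nn = [lat[(i + 1) % N, j], lat[(i - 1) % N, j],
          lat[i, (j + 1) % N], lat[i, (j - 1) % N]]
    return sum(1 for t in nn if t != s)

for _ in range(steps):
    # pick a random site and a random nearest neighbour
    i, j = rng.integers(N, size=2)
    di, dj = [(1, 0), (-1, 0), (0, 1), (0, -1)][rng.integers(4)]
    k, l = (i + di) % N, (j + dj) % N
    if lattice[i, j] == lattice[k, l]:
        continue  # swapping identical atoms changes nothing
    # energy change of the swap: each unlike bond costs +omega
    e_before = unlike_bonds(lattice, i, j) + unlike_bonds(lattice, k, l)
    lattice[i, j], lattice[k, l] = lattice[k, l], lattice[i, j]
    e_after = unlike_bonds(lattice, i, j) + unlike_bonds(lattice, k, l)
    dE = omega * (e_after - e_before)
    # Metropolis criterion: always keep downhill swaps, keep uphill
    # swaps with probability exp(-dE/kT); otherwise swap back
    if dE > 0 and rng.random() >= np.exp(-dE / kT):
        lattice[i, j], lattice[k, l] = lattice[k, l], lattice[i, j]
```

Running this with ω > 0 at low kT produces growing A-rich and B-rich patches; flipping the sign of ω produces chessboard-like ordered domains, mirroring the behaviour described above.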
Web version of Monte Carlo simulation = This page provides a limited web-based version of the Monte Carlo simulation in case you are not able to run it in IGOR Pro. There are a limited number of choices of interaction parameter (positive or negative), composition (40% A, 60% B or 50% A, 50% B) and temperature (10 K, then 100 K to 800 K in steps of 100 K). Video clips generated by running the simulation in IGOR Pro show the progression from an initially disordered state to the state determined by the input parameters. The video clip files are 2 to 3 MB in size, so if you are on a slow Internet connection you may prefer to view an image of the final state reached instead. As described on the previous page, the A atoms can lie on two different types of site (alpha - yellow and beta - blue), while the B atoms are black. Summary = Solid solutions consist of a mixture of components that are completely miscible with one another, and hence form a single solid phase. This Teaching and Learning Package has discussed the concept of solid solutions in two-component systems. By taking an atomistic approach, the processes of exsolution and ordering can be described and even modelled (using a Monte Carlo simulation). The interaction parameter, a quantitative representation of the tendency for bonding in the mixture, has been shown to be a critical factor in determining the equilibrium state of the mixture.
In other words, the miscibility of one component with the other is dependent on the underlying thermodynamics. Relating this package to the TLP on phase diagrams, the presence of solid solutions in phase diagrams has been discussed. Achieving a solid solution has been seen to be advantageous in terms of strengthening and hardening (via precipitates). The scale of the precipitates possible from a supersaturated solid solution can be significantly smaller than that possible with solidification from a liquid. Owing to the difficulty of demonstrating phase separation in solids, a demonstration involving a mixture of cyclohexane and aniline (both liquids) has been used instead. The ability of this system to scatter light during the change between one and two phases allows the transition to be easily identified. Questions =

1. Which of the regions of this Fe-C phase diagram are solid solutions? ![Phase diagram](images/image026.gif)
* a. 1, 2, 6, 9
* b. 2, 6, 7, 8, 9, 10
* c. 2, 6, 9
* d. 2, 4, 6, 8, 9
* e. 4, 7, 8, 10
* f. 7, 10
* g. 1, 2, 6, 7, 9, 10

2. Will the following factors affect the extent of solid solubility? (Answer yes or no for each.)
* a. Composition
* b. Temperature
* c. Atomic radii
* d. Crystallographic structure
* e. Electronegativity
* f. Valency
* g. Interaction parameter

3. How might a supersaturated solid solution be formed, using the following phase diagram? α and β are solid solutions. ![Phase diagram](images/image027.gif)
* a. Quench from T1 to T2 with composition x
* b. Cool slowly from T1 to T2 with composition x
* c. Quench from T1 to T2 with composition y
* d. Cool slowly from T1 to T2 with composition y

4. Which of these will *not* explain the strengthening of a solid solution?
* a. Lattice distortion
* b. Order hardening
* c. Stress fields
* d. Interaction with dislocations

5. Which of these is *not* a feature of exsolution in a solid solution?
* a. Negative interaction parameter
* b. Positive enthalpy of mixing
* c. Formation of A-rich and B-rich regions
* d. Increase in short range order

Going further = ### Books * A. Putnis, *Introduction to Mineral Sciences*, CUP, 1992 ### Websites * A collection of phase diagrams, previously hosted by the Georgia Institute of Technology * A Flash interactive tutorial on phase diagrams, based in the Department of Engineering at the University of Cambridge.
Aims On completion of this TLP you should: * Know how limited diffusion in the solid and liquid phases affects the distribution of solute in a solidified alloy, and be able to predict these effects in certain samples. * Understand the factors that control the shape of a solid growth front, and the resulting microstructure. Before you start You should understand phase diagrams, and how they can be used to predict the form of a solidified alloy. It would also be helpful to have an understanding of related topics such as diffusion; you might want to look at the related TLPs before you start. Introduction Metals and alloys are almost always cast from a liquid at some point during the manufacturing process, as they are typically obtained from their ores in liquid form, or melted down from scrap. Also, alloying two or more elements is usually done in the liquid phase, because rapid diffusion is required to ensure a uniform composition. This means that an understanding of how metals and alloys solidify is essential to allow us to explain and control the properties of the solid that is formed. Solute Partitioning = The TLP on phase diagrams explains the method for determining the concentrations and proportions of the phases formed on solidification of a binary alloy, from the phase diagram, assuming that equilibrium can be achieved and the most thermodynamically favourable phases formed. This method is known as the lever rule. Consider the case of a binary alloy, Al-5wt%Cu: ![Phase diagram of Cu - Al](images/cualimage.jpg) The phase diagram shows that the first solid formed, CS, at 650 °C, will be 0.5 wt%Cu, but the stable phase after total solidification is a complete solid solution with a concentration of 5 wt%Cu. Clearly there will need to be significant diffusion in the solid to allow the solute to distribute evenly, and for equilibrium to occur. This diffusion can be described mathematically by Fick's laws: \[J = - D\left( {\frac{{\partial C}}{{\partial x}}} \right)\] \[\left( {\frac{{\partial C}}{{\partial t}}} \right) = D\left( {\frac{{{\partial ^2}C}}{{\partial {x^2}}}} \right)\] The self-diffusion coefficient in the solid phase, DS, varies from about 10⁻¹⁰ to 10⁻¹⁴ m² s⁻¹ at the melting point, depending on the material in question, and obeys an Arrhenius equation for its temperature dependence: \[D = {D\_0}\exp \left( {\frac{{ - Q}}{{RT}}} \right)\] Values of diffusivity in most liquid metals are significantly higher, of the order of 10⁻⁹ m² s⁻¹, but also obey an Arrhenius law. The graph below shows how the self-diffusivity, D, varies with the undercooling below the melting temperature, Tm, in copper. ![Graph of D varying with undercooling for Cu](images/arrhenius.gif) The interdiffusivity in the alloy will behave in a similar way, so we can see that unless the alloy is cooled very slowly, the diffusivity will drop off rapidly before significant diffusion can occur into the solid, resulting in "coring" of grains, with lower solute concentration at the centre, where the first solid formed, and higher solute concentration at the edges. If the solute partitioning is extensive enough to raise the concentration of the liquid to the eutectic composition, the remaining liquid will then freeze with that composition in a eutectic structure, as in the micrograph of an Al-5 wt% Cu (5% copper and 95% aluminium, by weight) alloy shown below: The pale areas are dendrites with a structure based on Al, getting darker towards the edges as the Cu concentration increases. The dark areas are the eutectic that forms in between the dendrites as fine lamellae of Al and CuAl2.
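As a rough feel for the numbers quoted above, the sketch below evaluates an Arrhenius diffusivity and the characteristic diffusion distance √(Dt) at several undercoolings. The values of D0 and Q are assumed, order-of-magnitude inputs (not data from this TLP), chosen so that D near the melting point lands in the quoted 10⁻¹⁰ to 10⁻¹⁴ m² s⁻¹ range.

```python
import numpy as np

R = 8.314          # gas constant, J mol^-1 K^-1
D0_solid = 2.0e-5  # pre-exponential factor, m^2 s^-1 (assumed)
Q_solid = 200e3    # activation energy, J mol^-1 (assumed)
Tm = 1358.0        # melting point of copper, K

def D(T, D0, Q):
    """Arrhenius temperature dependence of the diffusivity."""
    return D0 * np.exp(-Q / (R * T))

for undercooling in (0, 100, 300, 500):
    T = Tm - undercooling
    Ds = D(T, D0_solid, Q_solid)
    # characteristic diffusion distance after t = 100 s at this temperature
    print(f"T = {T:6.0f} K: Ds = {Ds:.2e} m^2/s, "
          f"sqrt(Ds*t) = {np.sqrt(Ds * 100) * 1e6:.2f} um")
```

The rapid fall of √(Dt) with undercooling is the quantitative reason why coring is hard to avoid at practical cooling rates.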
In order to obtain quantitative expressions for the way solute is distributed, we need to use a quantity known as the partition coefficient, k, given by: \[k = \frac{{{C\_S}}}{{{C\_L}}}\] This is the ratio of the concentrations at the solidus, CS, and the liquidus, CL, at a given temperature. It determines the extent to which solute is ejected into the liquid during solidification. If the solidus and liquidus are straight lines, then k is independent of temperature. The calculation of k is shown in the Bi-Sn phase diagram below: ![Phase diagram for Bi - Sn](images/BiSn.jpg) In most cases, the liquidus and solidus are not straight lines, but they are often close enough that we can assume that k is independent of temperature. Also, it is often the case that k is less than one, so that the solid forming is of a higher purity than the liquid. The Scheil Equation = Since the properties of an alloy can depend strongly on the concentration of solutes that it contains, being able to quantitatively predict the concentrations in the solid is desirable, but a mathematical description of solidification is very difficult in the general case. Solutions can be obtained when certain assumptions are made. For example, we can assume: * There is no diffusion in the solid phase, DS = 0. This will hold if the characteristic diffusion distance is much smaller than the length of the sample, \(\sqrt {{D\_{\rm{S}}}t} \) << *L* * There is complete mixing in the liquid phase, giving a uniform concentration in the liquid. This may occur because of convection, or can be aided by mechanical mixing. Using these assumptions we can derive the Scheil equation, which describes the composition of the solid and liquid during solidification, as a function of the fraction solidified. Work through the animation to see how the conditions required to derive the Scheil equation are obtained: Equating the amount of solute in the shaded areas gives: \[\left( {{C\_L} - {C\_S}} \right){\rm{d}}f = \left( {1 - f} \right){\rm{d}}{C\_L}\] Solving this gives us the Scheil equation, for the profile of solute in the completely solidified bar: \[{C\_S} = k{C\_0}{\left( {1 - {f\_S}} \right)^{k - 1}}\] where CS is the concentration of solute in the solid at a fractional distance along the bar, fS, C0 is the initial concentration of the liquid, and *k* is the partition coefficient. The Scheil equation predicts a profile with a concentration that tends towards infinity at the end of the bar, but clearly there will be an upper limit on the concentration of any solid forming. In the case where A and B form a complete solid solution across the composition range, the liquid will, at some point, reach a concentration of 100% B and hence solidify as pure B. An alternative case arises in a binary eutectic system of A and B, where a solid eutectic structure will form when the concentration of the liquid reaches the eutectic concentration.
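A short numerical sketch of the Scheil profile is given below. It evaluates the equation above for assumed values loosely based on Al-Cu (k ≈ 0.17, eutectic at ≈ 33 wt% Cu), and estimates the fraction of the bar that solidifies before the liquid reaches the eutectic composition.

```python
import numpy as np

k = 0.17      # partition coefficient (assumed, typical of Al-Cu)
C0 = 5.0      # bulk alloy concentration, wt%
C_eut = 33.0  # eutectic concentration, wt% (approximate for Al-Cu)

fs = np.linspace(0.0, 0.999, 1000)       # fraction solidified
Cs = k * C0 * (1.0 - fs) ** (k - 1.0)    # Scheil equation
print(f"Cs at fs = 0.5: {Cs[500]:.2f} wt%")

# The liquid composition is CL = C0*(1-fs)^(k-1); setting CL = C_eut
# gives the fraction solidified when eutectic starts to form.
f_eut = 1.0 - (C_eut / C0) ** (1.0 / (k - 1.0))
print(f"Fraction solidified before eutectic forms: {f_eut:.3f}")
```

With these assumed inputs, roughly the last 10% of the bar freezes as eutectic, consistent with the interdendritic eutectic seen in the Al-5wt%Cu micrograph earlier.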
Steady State Solidification = We can also consider another situation, where there is still no diffusion in the solid phase, but in the liquid phase there is now limited diffusion, rather than complete mixing. From Fick's second law, we have: \[\frac{{{\rm{d}}C}}{{{\rm{d}}t}} = {D\_{\rm{L}}}\frac{{{{\rm{d}}^2}C}}{{{\rm{d}}{x^2}}}\] also: \[\frac{{{\rm{d}}C}}{{{\rm{d}}t}} = \frac{{{\rm{d}}C}}{{{\rm{d}}x}}\frac{{{\rm{d}}x}}{{{\rm{d}}t}} = v\left( {\frac{{{\rm{d}}C}}{{{\rm{d}}x}}} \right)\] These combine to give: \[{D\_{\rm{L}}}\left( {\frac{{{{\rm{d}}^2}C}}{{{\rm{d}}{x^2}}}} \right) + v\left( {\frac{{{\rm{d}}C}}{{{\rm{d}}x}}} \right) = 0\] which can be evaluated with the appropriate boundary conditions to give us: \[C = {C\_0} + \frac{{{C\_0}\left( {1 - k} \right)}}{k}\exp \left( {\frac{{ - x}}{{{D\_{\rm{L}}}/v}}} \right)\] This equation describes the solute profile in the liquid ahead of the interface, in the steady state. In the following pages, x should, strictly speaking, be interpreted as the distance ahead of the moving front. This model describes the solute profile in the liquid ahead of the interface once the situation has reached a steady state. However, when solidifying a bar, for example, it will take a period of time for the solute ‘bow wave' to build up ahead of the interface. During this *initial transient*, solid will be forming with a concentration less than C0 (initially it will be kC0) until a steady state is achieved. The solid then advances along the length of the bar with the solute bow wave ahead of the interface. When the bow wave, with a characteristic length DL/v, begins to run into the end of the bar, the solute ‘piles up', giving a rapid increase in the solute concentration of the liquid. This *final transient* sees an increase in the concentration of the solid, and may result in the formation of some non-equilibrium eutectic at the end of the bar. The simulation below uses a numerical model to predict the solute profiles for the solidification of a 10 mm bar, with a bulk concentration of 10%. The partition coefficient, k, the size of the solute bow wave (which is dictated by DL/v), and the eutectic composition for the system can all be changed using the scroll bars. Adjust the variables to see how they each affect the solute profile, and consider why the simulation responds as it does. Click ‘Run' to see how the solute profile develops during this schematic movie of steady state solidification.
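The steady-state profile can be checked numerically. The sketch below evaluates the equation above for assumed values of k, DL and v, and confirms that the concentration in the liquid at the interface is C0/k, decaying back to C0 over the characteristic length DL/v.

```python
import numpy as np

k = 0.2     # partition coefficient (assumed)
C0 = 10.0   # bulk concentration, %
DL = 1e-9   # diffusivity in the liquid, m^2 s^-1 (assumed)
v = 10e-6   # growth front velocity, m s^-1 (assumed)

delta = DL / v                  # characteristic length of the 'bow wave'
x = np.linspace(0, 5 * delta, 6)

# steady-state solute profile in the liquid ahead of the interface
C = C0 + (C0 * (1 - k) / k) * np.exp(-x / delta)

print(f"bow-wave length DL/v = {delta * 1e6:.0f} um")
print(f"C at the interface (x = 0): {C[0]:.1f}  (= C0/k = {C0 / k:.1f})")
```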
Zone Refining = Zone refining is an industrial process which makes use of the fact that a material often solidifies with a higher purity than the surrounding liquid, in order to produce materials of very high purity. A bar is passed through a furnace that melts a small section of the bar. As the bar passes through the furnace, this section of liquid is passed along the length of the bar. The solid that forms as the section passes by is of a higher purity than the liquid, and the excess solute is partitioned into the liquid section. After one run, the solute profile will be similar to that after steady state solidification, as the limited size of the liquid section has the same effect as limiting the diffusion in the liquid. Subsequent runs will carry more of the solute to the end of the bar. After many cycles, almost all of the impurity will be concentrated at one end of the bar, which can then be removed, leaving a material of very high purity. Sketches of the solute profile of an initially homogeneous bar, which has been zone refined, are shown after one cycle (left) and many cycles (right).

| | |
| - | - |
| Graph for zone refining - one cycle | Graph for zone refining - many cycles |

This process is used in the production of silicon for use in the manufacture of microelectronic devices. The semi-conducting properties of silicon are highly dependent on the type and concentration of dopant atoms, so all impurities have to be removed before any dopants are added. ![Image of 2 wafers](images/wafers.jpg) This picture shows the polished (left) and unpolished surfaces of silicon wafers cut from a bar that has been purified by zone refining. Dendritic Growth The animation below shows how the temperature gradient in the liquid affects the morphology of the growth front in a pure metal: The animation referred to the driving force for solidification, which is greater for larger undercoolings. When a dendritic structure forms, the dendrite arms grow parallel to the favourable growth directions, normally 〈 100 〉 in cubic metals. Grains which are orientated with the 〈 100 〉 direction close to the direction of heat flow will grow fastest and stifle the growth of other grains, leading to a columnar microstructure. To see more about how a microstructure develops in a casting, see the related TLP on casting.

| | |
| - | - |
| This (left) is an image of the 3D structure of dendrites in a cobalt-samarium-copper alloy. | This (right), taken with a reflected light microscope, shows the appearance of dendrites of a copper-tin alloy when observed as a 2D section through the 3D structure. |

Constitutional Undercooling = It is actually rare to have a negative temperature gradient ahead of the interface, yet it is observed that dendrites are very hard to avoid in practice. This occurs because most materials in use have significant levels of impurities. The following animation shows how dendrites form in a binary alloy: The situation can be analysed quantitatively to determine whether a dendritic growth front is likely. If we assume a steady state diffusion profile ahead of the interface (see the page on steady state solidification), the concentration of solute at any distance, x, ahead of the interface is given by: \[C = {C\_0} + \frac{{{C\_0}\left( {1 - k} \right)}}{k}\exp \left( {\frac{{ - x}}{{{D\_{\rm{L}}}/v}}} \right) \;\;\;\;\;\;(1)\] The liquidus temperature, TL, on the phase diagram is dependent on the concentration of solute through the equation: \[{T\_{\rm{L}}} = {T\_{\rm{m}}} + C\frac{{\partial {T\_{\rm{L}}}}}{{\partial C}}\] where Tm is the melting temperature of the pure substance, and ∂TL / ∂C is the gradient of the liquidus on the phase diagram, which is usually negative if k < 1. The graph of liquidus temperature against distance has a profile like the one shown below: ![Graph of undercooling](images/undercoolinggraph.gif) There will be an undercooled region ahead of the interface, in which a planar interface is unstable, if the temperature gradient, ∂T / ∂x, ahead of the interface is less than the gradient of the liquidus temperature at the interface. To maintain a planar interface the relationship below must hold: \[\frac{{\partial T}}{{\partial x}} > {\left. {\frac{{\partial {T\_{\rm{L}}}}}{{\partial x}}} \right|\_{x = 0}}\] which, by the chain rule, becomes: \[\frac{{\partial T}}{{\partial x}} > \frac{{\partial {T\_{\rm{L}}}}}{{\partial C}}{\left. {\frac{{\partial C}}{{\partial x}}} \right|\_{x = 0}}\] By differentiating (1) with respect to x, and setting x = 0, we get:
\[{\left. {\frac{{\partial C}}{{\partial x}}} \right|\_{x = 0}} = - \frac{{{C\_0}}}{{{D\_{\rm{L}}}/v}}\frac{{\left( {1 - k} \right)}}{k}\] The critical gradient required to maintain a planar interface is given by: \[{\left( {\frac{{\partial T}}{{\partial x}}} \right)\_{{\rm{crit}}}} = - \frac{{{C\_0}}}{{{D\_{\rm{L}}}/v}}\frac{{\left( {1 - k} \right)}}{k}\frac{{\partial {T\_{\rm{L}}}}}{{\partial C}}\] For temperature gradients only slightly below the critical gradient, the planar interface will break down, but the trapping of solute partitioned between the primary dendrite arms prevents growth of secondary arms, and a cellular growth front evolves. The simulation below allows you to predict the morphology of the solid growth in various systems. It plots the liquidus temperature, in black, against distance, on the lower chart. This is calculated from the composition of the liquid, which is plotted on the upper chart. The simulation also plots a temperature profile in the liquid, in red. The gradient of this can be changed using the scrollbar labelled ∂T / ∂x. The upper limit of this gradient is 10⁴ K m⁻¹, which is an approximate upper limit for gradients that can be practicably achieved. The critical gradient required to ensure a planar interface is plotted in green, and is also displayed in the box. The partition coefficient, k, the bulk concentration, C0, the diffusivity in the liquid, DL, and the growth front velocity, v, can all be adjusted using the appropriate scrollbars. Try adjusting the variables to see how the critical gradient and the liquidus temperature profile change. Use the appropriate equations to justify the behaviour of the simulation. You should note that in order to ensure a planar interface, the growth front velocities need to be quite low, of the order of tens of microns per second. This is why it is often difficult to avoid dendritic growth in practical solidification. In dendritic growth, most of the solute is ejected between the dendrite arms. The structure of the dendrites serves to prevent mixing with the rest of the liquid, so that the solute partitioning described earlier occurs over the length scale of the secondary dendrite arm spacing. This is known as *microsegregation*.
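The critical-gradient expression can be evaluated directly. The sketch below does this for an Al-5wt%Cu alloy, similar to question 7 later in this TLP, using assumed phase-diagram values for the partition coefficient and the liquidus slope; only the form of the equation is taken from this page.

```python
C0 = 5.0    # bulk concentration, wt% Cu
k = 0.17    # partition coefficient (assumed from the phase diagram)
m = -3.5    # liquidus gradient dTL/dC, K per wt% (assumed, straight-line fit)
DL = 1e-8   # diffusivity in the liquid, m^2 s^-1
v = 10e-6   # growth front velocity, m s^-1

# critical temperature gradient for a planar front
grad_crit = -(C0 / (DL / v)) * ((1 - k) / k) * m
print(f"critical gradient = {grad_crit:.2e} K/m")

# compare with the maximum practicably achievable gradient of ~1e4 K/m
print("planar front possible:", grad_crit < 1e4)
```

With these assumed inputs the required gradient is of order 10⁵ K m⁻¹, well above the achievable ~10⁴ K m⁻¹, which illustrates why dendritic growth is so hard to avoid at this growth velocity.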
Summary = In most cases of solidification, the material is cooled too quickly to allow the equilibrium phases, predicted by the phase diagram, to form. This is because the material does not spend sufficient time at high temperatures, where diffusion is faster. The non-equilibrium phases that result can be explained qualitatively by using the phase diagram to predict the composition of the first solid formed. When diffusion is not sufficient for the first solid formed to reach the equilibrium composition, this will result in an excess of solute in the last solid formed, which is often sufficient to cause the formation of a non-equilibrium eutectic. Quantitative analysis also allows us to predict the compositions of the solid and liquid as a material solidifies, if we assume complete mixing in the liquid and no diffusion in the solid (Scheil equation), or if we assume finite diffusion in the liquid and no diffusion in the solid (steady state solidification). We can qualitatively understand the reasons why a planar solidification front breaks down into cells or dendrites, from the fact that the solute is partitioned into the liquid at the interface, and any random protuberance will be in a liquid of lower solute concentration, therefore favouring its growth compared to the rest of the solid. Using the knowledge of the solute profile in the liquid, we can quantitatively predict when dendrites will form as a result of this constitutional undercooling, and also make predictions about the scale and morphology of the structure formed. Questions = ### Quick questions *You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. What might cause a material not to form the phases predicted in the phase diagram after solidification?
* a. Holding the solid at a temperature close to the solidus temperature for a long time.
* b. Adding heterogeneous nucleants.
* c. Cooling slowly.
* d. Cooling quickly.

2. Which assumptions must you make for the Scheil equation to be valid? (You may select more than one answer)
* a. Infinitely slow cooling
* b. No diffusion in the solid
* c. Constant growth velocity
* d. Complete mixing in the liquid

3. When solidifying a hypoeutectic alloy, during which stage of solidification might you form solid with a eutectic structure?
* a. Initial transient
* b. Steady state
* c. Final transient
* d. At any stage

4. Which of the following reduce the likelihood of dendrites forming in a binary alloy? (You may select more than one answer)
* a. High solute concentration, *C0*
* b. Low growth front velocity, *v*
* c. Large temperature gradient in the liquid
* d. Small partition coefficient, *k*

### Deeper questions *The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

5. A bar of magnesium is contaminated with aluminium, at a concentration of 2 wt%. The maximum acceptable concentration is 1 wt%. It is to be purified by directional solidification with a planar growth front, and complete mixing in the liquid. Use the Al-Mg interactive phase diagram to estimate the fraction of the bar that will have an acceptable purity.

6. How would the solute profile of a bar, solidified with complete mixing in the liquid, be affected if some diffusion could occur in the solid?

7. An Al-5wt%Cu alloy undergoes steady state solidification, with a growth front velocity of 10 μm s⁻¹. The diffusivity in the liquid is 10⁻⁸ m² s⁻¹. The equipment being used is capable of applying a temperature gradient of up to 10⁴ K m⁻¹; is it possible to ensure that the alloy will solidify with a planar growth front? * Use the equation: ![](images/q6a.gif) to determine the critical temperature gradient in the liquid. * The partition coefficient, *k*, and the liquidus gradient, ![](images/q6b.gif), can be calculated from the phase diagram. (You will need to approximate the liquidus as a straight line.)

### Open-ended questions *The following questions are not provided with answers, but intended to provide food for thought and points for further discussion with other students and teachers.*

8. The micrograph shows a Cu-30wt%Ni alloy that has been chill cast (cast in a mould that is kept much colder than the fusion temperature of the alloy). Look at the phase diagram, and rationalise the features present in the sample. What is the approximate length scale of the microsegregation in the sample? The grain structure is dendritic, with primary arm widths of ~20 μm.
The formation of dendrites has been caused by constitutional undercooling, which would be unavoidable in chill casting because the high cooling rate means that the growth front velocity is very high. There is microsegregation, with nickel-rich regions showing up pale, and regions lower in nickel appearing darker. The phase diagram shows that the system has a partition coefficient greater than one, so the first solid formed is nickel-rich (more impurity), and the final solid to form, between the dendrites, is low in nickel. The length scale of the microsegregation is approximately 5 μm. ![Annotated image of micrograph 11 from the DoITPoMS library](images/000011notes.jpg) Going further = ### Books D. A. Porter and K. E. Easterling, *Phase Transformations in Metals and Alloys*, Chapman and Hall, 1992. Chapter 4 covers solidification. Earlier chapters cover phase diagrams, diffusion, and microstructure. Later chapters cover solid-state transformations. #### Websites This website covers much of the same material as this TLP, but also has more content covering the kinetics of solidification, and the formation of a eutectic structure. This page covers dendritic solidification, and contains many images of dendrites. The page links to a selection of movies showing dendritic solidification.
Aims This TLP is designed to give you a good working knowledge of the stereographic projection and to enable you to identify and plot poles. This TLP will be looking solely at the stereographic projection of the cubic crystal system in order to keep it simple. Before you start This TLP assumes a good knowledge of the atomic structure of crystals, and the indexing of planes with Miller indices. It is also closely linked with the TLP on slip, which gives more information on the mechanisms of slip in single crystals. It would therefore be useful to read the related TLPs first. Introduction In crystal geometry, the most important aspect of the lattice is the angular relationship between various planes and other symmetry elements, not the relative translational position of planes. The importance of this is demonstrated by the expression of the crystal structure through the outside planes of a macrocrystal, for example as in the fluorite crystal shown here: ![Image of a crystal](images/Image_1.JPG) (It should be noted that this is not common. To grow crystals so that their external structure represents their internal structure requires a very controlled set of conditions. This makes it only the more important to understand angular relationships, as they cannot be seen simply by looking at a crystal.) We need to be able to describe these angular relationships in an easily understandable manner, and so we use the stereographic projection, which presents a 3D structure on the surface of a sphere. This can then be reduced to a 2D representation, which allows direct measurement of angles between various rotational axes or normals to planes, and this is very useful. The projection has a long history, being first used by Neumann in 1823. Basic concept = The basic idea is to represent planes as points on some representative surface, which maintains the angular relationship of the points to each other. In the spherical projection, various structural features are expressed as points on a sphere. This sphere sits around the object being examined. Because lattice planes always maintain the same angular relationship to each other, planes can be represented by a plane normal. It is the plane normal which is used to produce the point on the sphere. This point is referred to as the ‘pole' of the plane. [Click on the image and drag it, in order to observe the image from all angles. This can be done with several images in this TLP.] The intersection of a plane normal with the sphere of projection results in a point on the sphere which is referred to as a ‘pole'. It is possible to represent multiple planes on a single sphere, by extending the plane normals of all the planes. These can be joined to see angular relationships, as below. The information can be stored as 3D spheres, but these are unwieldy, as it would require you to carry around a sphere. It is a lot easier to use a 2D representation of the sphere. The 2D representation is generated by projecting the sphere onto a ‘projection plane', in this case to produce a stereographic projection. This is done by connecting the points on the sphere to some defined ‘projection point'. The projection point is typically defined as one of the poles of the sphere, as shown here: An important aspect of these projections is the position of the projection plane.
There are two positions commonly used, equatorial and above, shown here (click and drag image to rotate): The stereographic projection is then built up by connecting points on the sphere to projection points, and then noting where the connecting lines intersect the plane of projection. We can also project down the plane itself, as well as just its normal. The plane projects down as an arc on the projection plane, and is found at 90 degrees to its normal. Precisely what this means will be seen later. At this stage of the process, it is useful to define a feature of the projection: the ‘primitive circle'. This is a circle on the projection plane, which is located where the sphere of projection intersects the projection plane. This defines a boundary around the stereographic projection. Projected points may fall inside or outside of the primitive circle, depending on which pole is used as a projection point, as shown here: Points may appear at a great distance outside the primitive circle, so typically, different projection points are used for features in different hemispheres. For example, in the above example, the South Pole would be used. This is represented by using circles and dots depending on where a point is projected from. A pole projected from the North Pole is represented by a circle and a pole projected from the South Pole is represented by a dot. In this TLP, we will only be considering poles in the northern hemisphere, so the stereograms will only show dots. Demonstration of projection = While we have seen the basis of projection for a very simple system, we may now more closely examine the production of a stereogram (another word for stereographic projection), by producing the stereogram for a cube. Look at a cube: The most obvious symmetry element is the four-fold rotational symmetry. This presents on the sphere as a projected rotational axis, which intersects with the sphere, and is then projected down onto the projection plane. It can easily be seen that the projected points of the rotational axes maintain the same symmetry and angular relationship on the projection as they do in 3D. This can be extended further by the addition of 3-fold rotational axes which project from the vertices of the cube. (If these are not immediately obvious to you, please see the above image of a cube, and observe it down the [111] direction.) To differentiate between the four-fold and three-fold rotational axes, we introduce a new notation, such that various types of symmetry elements, when projected onto the plane, are illustrated in different ways. These typically are: So, adding in the diads that project through the edges of the cube, with the new notation: The projection at this point is evidently getting rather complex. But the 2D representation is still easily understood. Here are all rotation axes on a single diagram. There are also a collection of planes of symmetry. These intersect with the sphere in an infinite number of places, and as such, present as curves on the plane of projection, as can be seen here. The curves here take the form of ‘great circles'. This will be covered in more detail later, but for now, all you need to know is that planes which pass through the origin (i.e. the centre of the sphere) present as great circles, which intersect with the primitive circle at the opposite ends of a diameter of the projection. In the full stereogram, you will also see that axes of rotation which lie in planes of symmetry show on the great circle. So, assembling the entire stereogram, we see: As you can see, the stereogram holds a large amount of information in a method which can be easily interpreted when you understand the principles behind it.
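For those who prefer coordinates to constructions, the position of a projected pole can also be computed directly: a unit plane normal (x, y, z) in the northern hemisphere, projected from the south pole onto the equatorial plane, lands at (x/(1+z), y/(1+z)). The sketch below applies this to a few cubic poles; the function name is illustrative, and the formula assumes the equatorial projection plane described above.

```python
import numpy as np

def stereographic_pole(hkl):
    """Project the pole of a cubic (hkl) plane onto the equatorial plane.

    For cubic crystals the plane normal is parallel to [hkl]. A unit
    vector (x, y, z) with z >= 0, projected from the south pole onto
    the equatorial plane, lands at (x, y) / (1 + z).
    """
    n = np.asarray(hkl, dtype=float)
    x, y, z = n / np.linalg.norm(n)
    if z < 0:
        x, y, z = -x, -y, -z   # use the equivalent northern-hemisphere pole
    return x / (1 + z), y / (1 + z)

for hkl in [(0, 0, 1), (1, 0, 1), (1, 1, 1), (1, 0, 0)]:
    X, Y = stereographic_pole(hkl)
    print(hkl, "->", (round(X, 3), round(Y, 3)))
```

As expected, 001 lands at the centre of the stereogram, 100 on the primitive circle, and 101 at a radius of tan(45°/2) ≈ 0.414 from the centre.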
Important properties of the stereographic projection Preservation of angular truth: This is the main basis for use of the stereographic projection. The angle between poles of planes is the angle between those poles on the sphere. This is also the angle seen when the poles are projected down onto the projection plane. This has been seen in the case of the cube. However, the axis system of the stereographic projection is slightly more complicated, and will be investigated further when we look at the Wulff net. The other important property is that any plane projects onto the projection plane as either a circle or a straight line. However, we do not necessarily see the entire circle. For example, planes which pass through the origin, if projected from a single point, present as a circle which falls both inside and outside of the primitive (click and drag image to rotate). Typically, we would instead project some of the plane from both possible projection points. This leads to a ‘double arc'. The circle produced here is called a ‘great circle'. It will always pass through both ends of a diameter of the primitive circle. One special case of a great circle is the primitive circle, which we saw before. The other possibility is a ‘small circle'. This appears when examining a plane which does not pass through the origin. This produces a circle on the projection plane which will not pass through opposite ends of a diameter of the projection plane (click and drag image to rotate). These will be looked at in more detail later. The Wulff net = Having understood how the projection is constructed, we can now look at how it is examined using the Wulff net. The net is a projection of a collection of great and small circles, which represent lines of latitude (small circles) and lines of longitude (great circles) on the sphere, as seen here: If we consider rather more of each, we can project them downwards as before, using the principle of projection, to construct a stereographic projection of planes which are two degrees apart. This creates the Wulff net. ![diagram of Wulff net](images/Image_24.jpg) This can be used to place planes and poles of planes on a stereogram with great accuracy. It allows the maintenance of angular truth, while still being easy to draw. Use of the Wulff net in constructing a stereogram = There are a few important things to note about a stereogram. Any plane whose pole lies upon a great circle shares a zone with any other plane whose pole is on that great circle. For example, in the cubic system, (100), (010), (\(\bar 1\)00) and (0\(\bar 1\)0) all lie on the primitive circle. The primitive circle is a special case of a great circle. Therefore, if you are trying to plot the pole of a plane on a stereogram and you know which zone it lies in, the use of a Wulff net will enable you to draw it relatively straightforwardly. When drawing stereographic projections, brackets are not used when defining poles representing the normals to planes. Hence, the normals to the planes (100), (010), (\(\bar 1\)00) and (0\(\bar 1\)0), etc. are written as 100, 010, \(\bar 1\)00 and 0\(\bar 1\)0. For cubic crystals, the normal to (*hkl*) planes is parallel to the vector [*hkl*], so that the pole representing the normal to the (*hkl*) set of planes is also the pole representing the vector [*hkl*]. While this is also true for particular directions of the plane normal to the (*hkl*) set of planes and vectors [*hkl*] for crystals of lower symmetry, such as the normal to the (010) planes of an orthorhombic crystal being parallel to [010], it is not generally true. The following animation goes through the basics of using a Wulff net for cubic crystals where the centre of the stereogram is 001. Plotting poles on the stereogram through use of the Wulff net = It is possible to make use of the Wulff net to find the angular relationships of various poles of planes. However, to find these angular relationships, we need to plot the poles onto a Wulff net. There are various ways of doing this, described below. Identifying poles on a stereogram through use of the Wulff net To identify poles, find two great circles that intersect at the desired pole *hkl*. Find the two zone directions [*u*1*v*1*w*1] and [*u*2*v*2*w*2] of these two great circles by, in each case, identifying two poles lying in these zone directions, and then using the Weiss zone law condition to determine the two zone directions. The desired pole *hkl* is then the normal to the plane *hkl* which contains the directions [*u*1*v*1*w*1] and [*u*2*v*2*w*2]; this can also be determined from the Weiss zone law condition. In general, directions such as [*u*1*v*1*w*1] and [*u*2*v*2*w*2] are referred to the real space lattice, while normals to planes (*hkl*) are referred to the reciprocal lattice. However, since for cubic crystals the normal to the plane *hkl* is parallel to the vector [*hkl*], the algebra required is equivalent to taking cross products to determine the zone directions [*u*1*v*1*w*1] and [*u*2*v*2*w*2], and then taking a cross product again to determine the desired pole *hkl*. So, for cubic crystals and stereograms of cubic crystals, we can drop the distinction between the real lattice and the reciprocal lattice. Therefore, for example, we can identify poles on stereograms of cubic crystals using vector addition, e.g.:
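A small numerical sketch of this vector-algebra route is given below: for cubic crystals the angle between two poles follows from a dot product, and a zone direction from a cross product (the Weiss zone law). The example indices are taken from question 1 at the end of this TLP; the function names are illustrative.

```python
import numpy as np

def angle_between_poles(h1, h2):
    """Angle between two poles of a cubic crystal, in degrees.
    For cubic crystals the pole hkl is parallel to the vector [hkl],
    so a simple dot product gives the angle."""
    a, b = np.asarray(h1, float), np.asarray(h2, float)
    c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def zone_direction(h1, h2):
    """Zone axis [uvw] common to two planes, from the Weiss zone law
    (equivalent to a cross product for cubic crystals)."""
    return np.cross(h1, h2)

print(f"angle between 323 and 511: "
      f"{angle_between_poles((3, 2, 3), (5, 1, 1)):.1f} degrees")
print("zone of (100) and (010):", zone_direction((1, 0, 0), (0, 1, 0)))
```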
Applications of the Stereographic Projection - Slip = Slip is an important deformation mechanism, and as such, it is important to understand it. Slip is typically defined in terms of systems, containing both a slip plane and a slip direction. The use of a Wulff net allows these to be found easily given the tensile axis, following the procedure shown here: Interactive Wulff net = It is often useful to ‘play' with a Wulff net in order to better understand the angular relationships between planes. Therefore, below is provided a Wulff net that will plot any plane with indices up to a maximum of 6. Summary = The stereogram is a very useful tool in the study of crystal structures, and in the understanding of how those structures relate to planes within crystals. This allows interpretation of diffraction patterns and lattices, making stereograms an important part of crystallography. Questions = ### Deeper questions *The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.* 1. Using a Wulff net, plot the {100} poles on a stereogram aligned so that the paper is in the x-y plane. By measuring angles along great circles, with rotation of the Wulff net if necessary, plot 320, 323, 510 and 511. Find the angle between 323 and 511. 2. Plot the pole 141 by plotting two intersecting great circles on a Wulff net. 3. Plot the pole 211 by drawing small circles around 001, 111 and 110 on a Wulff net. 4.
Identify all green poles on this diagram by use of vector addition and zonal relationships.![](images/Q4a.gif) Going further = ### Books A. Kelly and K.M. Knowles, *Crystallography and Crystal Defects*, 3rd Edition, Wiley, 2020. D. McKie and C. McKie, *Essentials of Crystallography*, Blackwell Science Publications, 1986. F.C. Phillips, *An Introduction to Crystallography,* 4th Edition, Oliver and Boyd, 1971.
Aims On completion of this TLP you should: * Be able to predict the stiffness of rubber from a simple picture of its molecular structure * Be able to use the stiffness to predict how rubber specimens of different shapes will deform * Understand why rubber is known as an "entropy spring" * Appreciate that the stiffness of rubber rises with increasing temperature, in contrast to all other materials * Understand the pressure-size relationship exhibited by a balloon Introduction Rubbers (or elastomers) are polymers whose properties are affected by cross-linking between the individual chains. They have a fairly low cross-link density, with links at random intervals, usually of between 500 and 1000 monomers. These few cross-links are sufficient to prevent the unrestricted flow of whole molecules past neighbouring ones, but the chain segments between cross-links are free to move (provided the temperature is above the glass transition temperature, Tg). In particular, the segments can uncoil and recoil. Statistical theory is able to provide mathematical relationships between the density of cross-links and measurable physical properties, such as the stiffness. These relationships can be used to predict the extension under a particular load, for example a balloon being inflated or a bungee jumper, or a measured property can be used to calculate the extent of cross-linking. It is commonly known that most things get bigger when heated. Probably the best known example of a material that does not always expand when heated is water, but this only contracts on heating as the temperature rises from 0°C to 4°C. Outside this range it behaves as any other material. The explanation for the anomalous behaviour lies in a re-arrangement of the molecular structure. The underlying tendency for inter-atomic bond lengths to increase with rising temperature, which is due to the asymmetrical shape of the energy-spacing relationship, is common to all materials. Although rubber under normal conditions expands like other materials as it is heated, when under tension it behaves differently, contracting in the loading direction, rather than expanding, as it is heated. The explanation of this behaviour lies in the crucial contribution of entropy to the elasticity of rubber, which will be covered later in this package. It will also become clear why the stiffness of rubber is so much lower than that of other materials. Basically, this is because rubbers deform elastically by uncoiling of long, convoluted molecules, rather than by stretching of individual inter-atomic bonds. Theory of rubber conformation = ### Polymer Coils Polymer molecules are made up of many smaller units called monomers. A rubber is a fully amorphous, lightly cross-linked polymer, above Tg. Rubbers are normally composed of a -C-C- backbone chain. The bond angle is fixed at 109.5°, but the torsion angle can change, allowing the macroscopic shape of the chain to vary from being linear to being highly coiled and convoluted. ![Diagram of polymer coiling](images/image02.gif) In this diagram, on the left, each blue line represents a C-C link. The arrow shows the end-to-end distance of the chain segment, depicted as a thickened line. The segments tend to coil up to some extent, rather than aligning in a straight line. This can be thought of as the system increasing its entropy.
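The tendency of segments to coil can be illustrated with a freely-jointed-chain sketch, which samples random conformations and checks the random-walk result ⟨r²⟩ = na² quoted later in this TLP. The freely-jointed model (random link directions, ignoring the fixed 109.5° bond angle) is a simplifying assumption; for a real chain the fixed bond angle changes the prefactor but not the scaling.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 1000        # number of C-C links in a chain segment
a = 0.154e-9    # C-C bond length, m
trials = 1000   # number of chain conformations sampled

# Each link is a randomly oriented vector of length a: normalise
# Gaussian vectors to get uniformly random directions.
steps = rng.normal(size=(trials, n, 3))
steps *= a / np.linalg.norm(steps, axis=2, keepdims=True)

r = steps.sum(axis=1)                    # end-to-end vectors
r2_mean = (r ** 2).sum(axis=1).mean()    # <r^2> over all conformations

print(f"simulated <r^2> = {r2_mean:.3e} m^2")
print(f"theory    n*a^2 = {n * a ** 2:.3e} m^2")
```

The simulated mean-square end-to-end distance agrees with na², far shorter than the fully extended length na, which is why an unloaded segment is coiled.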
The probability distribution for the end-to-end distance can be described mathematically by a Gaussian function: \[P\left( {{{\vec r}\_1},........,{{\vec r}\_N}} \right) = {\left\{ {\frac{3}{{2\pi {b^2}}}} \right\}^{3N/2}}\exp \left\{ { - \sum\limits\_{i = 1}^N {\frac{3}{{2{b^2}}}r\_i^2} } \right\}\] ### Tangles and Crosslinks A piece of rubber, such as a rubber ball or a rubber band, is made up of many polymer molecules. As the molecules prefer to be coiled to a certain degree, rather than stretched out, the polymer molecules easily get tangled together. When chains become entangled, their mobility decreases. Furthermore, the entanglements mean that the chains cannot stretch as far as they otherwise would be able to, and so the stiffness of the rubber increases - at least if it is measured over short timescales, which do not permit the entanglements to slide. As well as physical entanglements, the chains can join together in another manner. If the chemistry of the chain is suitable, an atom belonging to one chain can form a chemical bond with an atom from another chain. This bond is called a cross-link. The nature of the cross-linking bonds is covalent. The cross-links inhibit the motion of the polymer chains and so increase the stiffness of the rubber. These are stable over long timescales, so the stiffness is not time-dependent. ### Coiling and Uncoiling Consider what happens when you stretch a rubber band or a balloon. We know that the rubber will stretch a long way before it breaks, but we ought to be able to explain why it behaves the way it does. When you first put the rubber under tension, the polymer molecules will begin to change their conformation. Pulling on the chains makes the polymers uncoil. This is shown schematically below: ![Diagram of coiled chains](images/image03.gif) Unloaded coiled chains ![Diagram of chains uncoiling under tension](images/image04.gif) Loaded in tension As you continue to pull on the rubber, the chain segments start to reach their limits of extensibility. In materials such as silly putty or chewing gum, whole chains can slide past one another, and this sliding can continue until the chains no longer make contact; the rubber gets drawn out to a very thin cross section and perhaps fractures. For conventional cross-linked rubbers, on the other hand, the chain segments uncoil as far as they can before the cross-links inhibit further uncoiling. Further tension now pulls directly on the C-C bonds of the polymer backbone. When the force becomes great enough, the C-C bonds will break and the rubber will snap. The strength of the rubber is thus not very different from other materials, whereas the stiffness is lower by orders of magnitude. We should now be able to predict the shape of the extension vs force graph when extending rubber. This will be done for both uniaxial and biaxial tension later in the TLP. ### Effect of sun on rubber band Have you ever noticed what happens to a rubber band when it is left out in the sunshine for too long? The rubber becomes brittle and can break in your hand. The explanation for why this happens concerns cross-linking bonds. Ultra-violet light from the sun provides the polymer molecules with the activation energy they need to be able to form more cross-links with other chains. When the rubber band is left out for a long time, the density of cross-links increases. When you try to stretch the rubber band, the chains are prevented from uncoiling or sliding past each other, due to the large number of cross-links.
Because of this you are effectively pulling on the C-C backbone bonds of the polymer, which are very stiff and will not stretch much. Instead the rubber band snaps with very little extension. Some oils and other chemicals have a similar effect on rubbers. However, butyl rubbers have a much lower density of available cross-link sites than other rubbers. Because of this it is much more difficult to form excess cross-link bonds, and so butyl rubbers are resistant to degradation from U.V. light and from oils. Thermodynamics - the entropy spring = When a stress is applied to a sample of cross-linked rubber, equilibrium is established fairly rapidly. Once at equilibrium, the properties of the rubber can be described by thermodynamics. Consider an element of rubber placed under uniaxial tension. The first law of thermodynamics states that dU = dQ - dW where dU is the change in the system's internal energy, and dQ and dW are the heat and work exchanged between system and surroundings as the system undergoes differential change. We are going to look at the specific case of uniaxial tension. Work done is given by force multiplied by distance, so the work done by a uniaxial force f is given by dWf = −f dL where dL is the differential change in the system's length due to the force f. (The negative sign implies that the work is done on the system.) If the deformation process is assumed to occur reversibly (in a thermodynamic sense), then dQ = T dS where S is the system's entropy. Combining the above equations gives (for uniaxial tension with V and T constant) dU = T dS + f dL. From this, the tensile (retractive) force is \[F = {\left( {\frac{{\partial U}}{{\partial L}}} \right)\_{T,V}} - T{\left( {\frac{{\partial S}}{{\partial L}}} \right)\_{T,V}}\] The first term on the RHS is the energy contribution to the tensile (retractive) force, or energy elasticity. In rubbers, this represents the storage of energy resulting from rotation about bonds and the straining of bond angles and lengths from equilibrium values. The second term on the RHS is the entropy contribution to the tensile (retractive) force, or entropy elasticity. It is caused by the decrease in entropy upon uncoiling of the chain segments. When rubber is extended, the change in length (and energy) comes almost entirely from a change in conformation, i.e. the rotation of bonds, and there is negligible stretching of the bonds. Therefore, at constant temperature, it can be approximated that the internal energy of the bonds does not change: dU = 0, so that \[F = - T\left( {\frac{{\partial S}}{{\partial L}}} \right)\] As the rubber is stretched, the chain is moving from a more probable (higher entropy) to a less probable (lower entropy) state. It is this lowering of entropy of the conformation that causes the retractive force, so rubber is described as an entropy spring. Entropy derivation It is possible to treat quantitatively the entropy change on extending a polymer chain: ![Diagram of polymer chain](images/image05.gif) If one end of the chain is at (0,0,0), then the probability of the other end being at point (x,y,z) is: \[p\left( {x,y,z} \right).dx.dy.dz = {\left( {\frac{b}{{\sqrt \pi }}} \right)^3}\exp \left[ { - {b^2}\left( {{x^2} + {y^2} + {z^2}} \right)} \right].dx.dy.dz\] where \[b = \sqrt {\frac{3}{{2n{a^2}}}} \] a = bond length n = number of links ![Graph of probability against distance](images/image06.gif) If we stretch the chain, so that the end is at a new location (x',y',z') such that \((x{'^2} + y{'^2} + z{'^2}) > ({x^2} + {y^2} + {z^2})\), then p(x,y,z) will decrease, leading to a decrease in entropy.
The entropy is given by

S = k ln Ω

where Ω is the total number of possible conformations leading to the same end position, so Ω is proportional to p(x,y,z). On stretching a chain, so that the initial end point (x,y,z) changes to (x',y',z') where

x' = λ\_x x,  y' = λ\_y y,  z' = λ\_z z

the associated change in entropy is given by

\[\Delta S = k\ln \left( {\frac{{\Omega '}}{\Omega }} \right) = k\ln \left( {\frac{{p'}}{p}} \right)\]

\[\Delta S = - k{b^2}\left[ {\left( {x{'^2} - {x^2}} \right) + \left( {y{'^2} - {y^2}} \right) + \left( {z{'^2} - {z^2}} \right)} \right]\]

\[\Delta S = - k{b^2}\left[ {\left( {\lambda \_x^2 - 1} \right){x^2} + \left( {\lambda \_y^2 - 1} \right){y^2} + \left( {\lambda \_z^2 - 1} \right){z^2}} \right]\]

In the unstressed state, with overall length r, we expect no preferential direction, so:

\[\left\langle {{x^2}} \right\rangle = \left\langle {{y^2}} \right\rangle = \left\langle {{z^2}} \right\rangle = \frac{{\left\langle {{r^2}} \right\rangle }}{3}\]

So, on average:

\[\Delta S = - k\left( {\frac{3}{{2n{a^2}}}} \right).\frac{{\left\langle {{r^2}} \right\rangle }}{3}.\left( {\lambda \_x^2 + \lambda \_y^2 + \lambda \_z^2 - 3} \right)\]

From random walk theory, ⟨r²⟩ = na², so the prefactor reduces to k/2. The entropy change of a single chain segment can then be multiplied by N (the number of chain segments) to give the total entropy change:

\[\Delta {S\_{{\rm{tot}}}} = - \frac{1}{2}Nk\left( {\lambda \_x^2 + \lambda \_y^2 + \lambda \_z^2 - 3} \right)\]

Contraction of rubber =

As was shown earlier in this TLP, the entropy change of an elastomer when it is stretched is given by:

\[\Delta S = - \frac{1}{2}Nk(\lambda \_1^2 + \lambda \_2^2 + \lambda \_3^2 - 3)\]

This can be simplified when the shape change is uniaxial extension, since the extension ratios in the transverse directions must then be equal: λ1 = λ2. Since there is no volume change, λ1λ2λ3 = 1, and hence

\[{\lambda \_1} = {\lambda \_2} = \frac{1}{{\sqrt {{\lambda \_3}} }}\]

Therefore:

\[\Delta S = - \frac{1}{2}Nk\left( {\frac{2}{{{\lambda \_3}}} + \lambda \_3^2 - 3} \right)\]

From F = -T (dS/dL), writing L = λ3 L0 so that dL = L0 dλ3, we have

\[F = \frac{{kTN}}{{{L\_0}}}\left( {{\lambda \_3} - \frac{1}{{\lambda \_3^2}}} \right)\]

This leads to an expression for the nominal tensile stress:

\[\sigma = \frac{F}{{{A\_0}}} = \frac{{kTN}}{{{V\_0}}}\left( {{\lambda \_3} - \frac{1}{{\lambda \_3^2}}} \right)\]

Experimental data fit this theory well, except at high extensions, when the chains reach maximum extension and the stiffness (gradient of the plot) increases greatly.

![Graph of nominal stress vs extension ratio](images/image07.gif)

Graph of stress vs extension

It is therefore possible to estimate the chain segment density of a rubber, if the extension under a particular load is known. An example of such a calculation is given below:

### Worked Example Calculation

A mass of 0.2 kg is suspended from a piece of rubber. The rubber is initially 10 cm long and has a circular cross section, of radius 2 mm. When the system equilibrates after the mass is attached, the new length of the rubber is 20 cm. The experiment is done at 298 K. Calculate the chain segment density in the rubber.

#### Solution

The equation we need is:

\[\sigma = \frac{F}{{{A\_0}}} = \frac{{kTN}}{{{V\_0}}}\left( {{\lambda \_3} - \frac{1}{{\lambda \_3^2}}} \right)\]

The chain segment density is N/V0.
Rearranging the above equation gives us:

\[\frac{N}{{{V\_0}}} = \frac{F}{{{A\_0}kT\left( {{\lambda \_3} - \frac{1}{{\lambda \_3^2}}} \right)}}\]

We can now calculate the force and cross-sectional area:

F = m g = 0.2 × 9.81 = 1.962 N

A0 = πr² = π × (0.002)² = 1.26 × 10⁻⁵ m²

Putting these values into our starting equation gives:

\[\frac{N}{{{V\_0}}} = \frac{{1.962\,{\rm{N}}}}{{1.26 \times {{10}^{ - 5}}\,{{\rm{m}}^2} \times 1.38 \times {{10}^{ - 23}}\,{\rm{J}}{{\rm{K}}^{ - 1}} \times 298\,{\rm{K}} \times \left( {\frac{{0.2}}{{0.1}} - {{\left( {\frac{{0.1}}{{0.2}}} \right)}^2}} \right)}} = 2.17 \times {10^{25}}\,{{\rm{m}}^{ - 3}}\]

Therefore there are 2.17 × 10²⁵ chain segments per m³.

Contraction experiment

The theory predicts that the stiffness of rubber is proportional to the temperature:

\[\sigma = \frac{{kTN}}{{{V\_0}}}\left( {{\lambda \_3} - \frac{1}{{\lambda \_3^2}}} \right)\]

The result of this is that, if the rubber is extended under a fixed load, it is likely to contract when it is heated (even after allowance is made for thermal expansion). This can be observed using the following apparatus:

Diagram and photograph of the apparatus

In the demonstration, a rubber strip is suspended inside a vertical Perspex tube, alongside a metre rule. The rubber strip is stretched by attaching a small load to the bottom end. The rubber is then heated using a hair dryer directed into the top end of the tube. A thermocouple is positioned inside the tube and connected to a digital meter that gives the temperature in degrees Celsius (which must be converted to K for use in the equations). In order to verify the theoretical explanation, you will need to make five observations from the demonstration:

* initial unloaded length of rubber strip (L0)
* loaded but unheated length of the rubber strip (L1)
* initial temperature (T1)
* loaded heated length of the rubber strip (L2)
* final temperature (T2)

Video of the contraction demonstration

Verification

The theoretical equation derived earlier relating extension and temperature was

\[\sigma = \frac{F}{{{A\_0}}} = \frac{{kTN}}{{{V\_0}}}\left( {\lambda - \frac{1}{{{\lambda ^2}}}} \right)\]

(The subscript has been dropped from λ since we are only considering one direction.) In the demonstration, F, A0, V0, N and k are all constant (once the load has been attached). Therefore, in order to verify this equation we need to show that

\[\frac{{{T\_1}}}{{{T\_2}}} = \frac{{{\lambda \_2} - \frac{1}{{\lambda \_2^2}}}}{{{\lambda \_1} - \frac{1}{{\lambda \_1^2}}}}\]

where the subscripts 1 and 2 refer to before and after the contraction respectively, not different directions as in the earlier derivation. The following observations were made:

* Before the experiment began it was noted that the top of the rubber band was 0.96 m from the base of the metre rule.
* It was also noted that the length of the weights was 0.13 m.
* The length of the rubber strip when unloaded, L0, was 0.205 m.

These values are used below, together with the ruler readings taken from the video, to calculate the stretched lengths of the rubber strip (a short script reproducing the arithmetic follows).
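Both the worked example above and the verification arithmetic in the next paragraph can be checked with a few lines of Python. This is a sketch only; the numerical values are those quoted in the text.

```python
import math

k = 1.38e-23   # Boltzmann constant, J/K

# --- Worked example: 0.2 kg on a strip, L0 = 0.1 m, radius 2 mm, 298 K ---
F, A0 = 0.2 * 9.81, math.pi * 0.002**2
lam = 0.2 / 0.1
N_over_V0 = F / (A0 * k * 298 * (lam - 1 / lam**2))
print(f"chain segment density: {N_over_V0:.3g} per m^3")   # ~2.17e25

# --- Contraction experiment: recorded observations (lengths in m, T in K) ---
L0, top, weights = 0.205, 0.96, 0.13
reading_cold, reading_hot = 0.077, 0.154   # ruler readings from the video
T1, T2 = 296, 338

lam1 = (top - reading_cold - weights) / L0
lam2 = (top - reading_hot - weights) / L0
print(T1 / T2)                                        # ~0.876
print((lam2 - 1/lam2**2) / (lam1 - 1/lam1**2))        # ~0.891
```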
Using the recorded observations we have

\[\frac{{{T\_1}}}{{{T\_2}}} = \frac{{296}}{{338}} = 0.876\]

From the video of the readings before and after heating,

\({\lambda \_1} = \frac{{{L\_1}}}{{{L\_0}}} = \frac{{\left( {0.96 - 0.077 - 0.13} \right)}}{{\left( {0.205} \right)}} = 3.673\), so \({\lambda \_1} - \frac{1}{{\lambda \_1^2}} = 3.599\)

\({\lambda \_2} = \frac{{{L\_2}}}{{{L\_0}}} = \frac{{\left( {0.96 - 0.154 - 0.13} \right)}}{{\left( {0.205} \right)}} = 3.298\), so \({\lambda \_2} - \frac{1}{{\lambda \_2^2}} = 3.206\)

Therefore

\[\frac{{{\lambda \_2} - \frac{1}{{\lambda \_2^2}}}}{{{\lambda \_1} - \frac{1}{{\lambda \_1^2}}}} = \frac{{3.206}}{{3.599}} = 0.891\]

So, to a close approximation,

\[\frac{{{\lambda \_2} - \frac{1}{{\lambda \_2^2}}}}{{{\lambda \_1} - \frac{1}{{\lambda \_1^2}}}} = \frac{{{T\_1}}}{{{T\_2}}}\]

and the theoretical explanation is verified. The small discrepancy is attributed to conventional thermal expansion; rubbers have relatively high expansivities (~50 × 10⁻⁶ K⁻¹), so a rise in *T* of about 50 K will increase the length of a strip initially 0.75 m long by about 2 mm.

Biaxial tension =

The relationship between the entropy change and the extension can be simplified for biaxial tension of a sphere in much the same way as has already been seen for uniaxial tension. Consider a square piece of membrane, with initial unstretched side L0. For biaxial tension: λ1 = λ2, and λ1λ2λ3 = 1, therefore:

\[{\lambda \_3} = \frac{1}{{\lambda \_1^2}} = \frac{1}{{\lambda \_2^2}} = \frac{1}{{{\lambda ^2}}}\]

Putting this into the equation relating entropy change to extension ratios:

\[\Delta S = - \frac{1}{2}Nk\left( {\lambda \_1^2 + \lambda \_2^2 + \lambda \_3^2 - 3} \right)\]

\[\Delta S = - \frac{1}{2}Nk\left( {2{\lambda ^2} + \frac{1}{{{\lambda ^4}}} - 3} \right)\]

From this (with ΔU = 0, so that w = -TΔS, and N now taken as the number of chain segments per unit volume), the work done per unit volume on stretching is:

\[w = \frac{1}{2}NkT\left( {2{\lambda ^2} + \frac{1}{{{\lambda ^4}}} - 3} \right) \;\;\;\;\;\;\;\;\;\;\; (1)\]

The work done on the square membrane is then the work done per unit volume multiplied by the area of the piece of membrane, L0², and the thickness, t0:

\[W = \frac{1}{2}NkT\left( {2{\lambda ^2} + \frac{1}{{{\lambda ^4}}} - 3} \right){t\_0}L\_0^2 \;\;\;\;\;\;\;\;\;\;\; (2)\]

If we now make an incremental change to the extension ratio, δλ, the amount of work needed to make this incremental extension is:

\[\delta W = 2F\left( {{L\_0}\delta \lambda } \right)\;\;\;\;\;\;\;\;\;\;\; (3)\]

where F is the force acting on each pair of opposite edges and L0δλ is the change in extension; the factor of 2 arises because the membrane is stretched in two directions at once.
By re-arrangement of (3) it follows that:

\[F = \frac{1}{2}\left( {\frac{{\delta W}}{{\delta \lambda }}} \right)\left( {\frac{1}{{{L\_0}}}} \right)\;\;\;\;\;\;\;\;\;\;\; (4)\]

Therefore, from (4) and (2):

\[F = NkT\left( {4\lambda - \frac{4}{{{\lambda ^5}}}} \right){t\_0}\frac{{{L\_0}}}{4}\;\;\;\;\;\;\;\;\;\;\; (5)\]

Rearranging (5), the force per unit length is given by:

\[\left( {\frac{F}{{\lambda {L\_0}}}} \right) = NkT\left( {1 - \frac{1}{{{\lambda ^6}}}} \right){t\_0} \;\;\;\;\;\;\;\;\;\;\; (6)\]

The total force acting over a cross-sectional area of the sphere is given by Pπr², where P is the internal pressure: this must be equal to the force per unit length acting around the circumference

\[NkT\left( {1 - \frac{1}{{{\lambda ^6}}}} \right){t\_0}2\pi r = P\pi {r^2}\;\;\;\;\;\;\;\;\;\;\; (7)\]

from which, using the relationship λ = r/r0, we have:

\[P = 2NkT\left( {{\lambda ^{ - 1}} - {\lambda ^{ - 7}}} \right)\left( {\frac{{{t\_0}}}{{{r\_0}}}} \right)\]

which gives us the predicted variation of pressure with increasing *λ*:

![Graph of pressure vs extension ratio](images/image10.gif)

Graph of pressure vs extension ratio

This predicts that there will be a maximum value of the pressure at an extension ratio of about 1.38; beyond this point, less pressure is needed to inflate the balloon further. At high extension ratios (>2.5), however, the finite extensibility effect becomes apparent and the pressure becomes larger than predicted.

Balloon experiment

The biaxial theory can be tested using the apparatus shown.

Apparatus for the balloon experiment

The data can also be used to estimate the volume density of chain segments, *N*. The balloon is not completely spherical, especially at smaller extensions, but the shape of the graph should be as predicted.

### Method

Initially the balloon is completely filled, using a bicycle pump, while the bung is placed in the end of the U-tube, in order to make sure the coloured water does not spill out of the manometer. Once the balloon is fully inflated, the tap is closed, so that air cannot escape from the balloon, and the bung is removed in order to allow the trapped air out and the internal pressures to equalise. The balloon radius is estimated using calipers to measure the diameter in three orthogonal directions. The height difference between the two menisci in the manometer is also measured. Once these two values have been measured, a small amount of air is released from the balloon by opening the tap. The balloon reduces in size and the measurements are repeated until the balloon is at atmospheric pressure.

### Analysis

The height difference can be related to the pressure inside the balloon by the equation:

P = ρ g h

where: P = pressure inside the balloon (relative to atmospheric); ρ = density of the liquid (for water, 1 g cm⁻³); g = acceleration due to gravity; h = height difference.

To estimate the value of N, it is necessary to measure the initial thickness of the rubber, t0, and initial radius, r0. At Pmax, λ is approximately 1.38, so:

\[{P\_{\max }} \approx 2NkT\left( {{{1.38}^{ - 1}} - {{1.38}^{ - 7}}} \right)\left( {\frac{{{t\_0}}}{{{r\_0}}}} \right)\]

\[{P\_{\max }} \approx 1.24NkT\left( {\frac{{{t\_0}}}{{{r\_0}}}} \right)\]

\[N \approx \frac{{{P\_{\max }}}}{{1.24kT}}.\frac{{{r\_0}}}{{{t\_0}}}\]

### Results

The experiment was run three times, and the results are plotted on the following graph.

![Graph of height difference vs extension ratio](images/image11.gif)

Below is a series of photographs of the balloon and corresponding graphs. ...
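The quoted position of the pressure maximum follows directly from the form of P(λ), independently of the prefactor 2NkT(t0/r0). A quick numerical check (a sketch only):

```python
import numpy as np

# Shape of the biaxial pressure curve; the prefactor 2NkT(t0/r0) only scales it.
lam = np.linspace(1.0, 4.0, 100_000)
shape = lam**-1 - lam**-7

# Numerical location of the maximum ...
print(lam[np.argmax(shape)])    # ~1.383

# ... agrees with the closed form: dP/dlam = 0  =>  lam^6 = 7.
print(7 ** (1 / 6))             # ~1.383
```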
It can be seen that there is indeed a peak value of the pressure at an extension ratio of around 1.4. This "pressure barrier" to inflation at a low extension ratio is familiar to anyone who has tried to blow up a balloon by mouth. It may be noted that some stiffening at high extension ratio might result from strain-induced crystallisation, but the predominant effect, at least for moderate values of λ, is thought to be that of non-Gaussian statistics.

Summary =

In this teaching and learning package you have been familiarised with the following concepts:

* Polymer chains in rubbers are coiled up in their equilibrium state.
* When a rubber is stretched, this occurs by uncoiling of individual chain segments. Its stiffness is thus much lower than that of other materials, for which stretching occurs by lengthening of the inter-atomic bonds.
* The retractive force exerted by a stretched piece of rubber arises from the tendency of individual chain segments to recoil back to their equilibrium shape, thus raising the entropy and reducing the free energy. It is thus possible to predict the stiffness of a rubber solely from a knowledge of its crosslink density (which dictates the chain segment length).

You should also have read and understood the entropy spring derivation and be familiar with the idea that the retractive force a rubber exhibits under tension is caused by the lowering of the rubber's conformational entropy. You should have seen how rubber deforms under uniaxial tension, observed the effect of heating a strip of rubber under tension and be able to explain both. You should also have observed how a rubber membrane deforms under biaxial tension and be able to explain the three regimes.

Questions =

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. Materials expand when heated because: | | | | | - | - | - | | | a | The mean interatomic distance increases as the thermal energy of the atoms increases | | | b | The heat takes up extra volume forcing the material to expand | | | c | The rubber melts | | | d | There are more chemical bonds being formed |

2. When rubber is put under uniaxial tension it: | | | | | - | - | - | | | a | Contracts | | | b | Extends | | | c | Explodes | | | d | Rises |

3. Rubber differs from all other materials in that: | | | | | - | - | - | | | a | The density of cross-links is higher | | | b | It can be extended by more than 100% | | | c | Its stiffness increases with increasing temperature | | | d | It demonstrates visco-elastic behaviour |

4. Approximately how many monomers are there in between cross-links in rubber? | | | | | - | - | - | | | a | 5-10 | | | b | 50-100 | | | c | 500-1000 | | | d | 5000-10000 |

5. What is meant by a change of conformation: | | | | | - | - | - | | | a | Stretching of C-C bonds | | | b | Any change of torsion angle between polymer segments | | | c | Chains sliding over each other easily | | | d | Breaking of C-C bonds and therefore failure of the material |

6. If a load is suspended from a rubber strip at room temperature, and the temperature is then reduced by 20°C, the load will be: | | | | | - | - | - | | | a | Pulled upwards | | | b | Lowered | | | c | Released | | | d | Unaffected |

7.
If a rubber strip is laid on a bench, unstretched at room temperature, and the temperature is then reduced by 20°C the strip will: | | | | | - | - | - | | | a | Contract | | | b | Expand | | | c | Stay the same | | | d | Decompose |

### Deeper questions

*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

8. A bungee jumper, Bill, whose mass is 82.5 kg prepares to do a bungee jump. The bridge from which he will jump is 130 m high. The jumper understands why rubber behaves the way it does under tension because he has read the DoITPoMS TLP "Stiffness of Rubber". He asks the instructor what the dimensions of the bungee cord are and is told it is 60 m long when not stretched and has an initial diameter of 10 cm. Immediately after his successful jump he hangs stationary 20 m from the ground waiting to be pulled up again. While dangling there he estimates the cross-link density of the rubber. What should his answer be?

9. Explain why the bungee jumping cord should be made from butyl rubber (especially if it is to be used in a sunny country). You should explain what might happen if butyl rubber is not used and why this occurs.

10. Explain how the stiffness of a rubber membrane changes when it is put under biaxial tension, paying special attention to the three separate regimes of behaviour and explaining how each one occurs. Why do rubber bands snap when over-stretched whereas Silly Putty flows and necks until it separates into two pieces?

### Open-ended questions

*The following questions are not provided with answers, but intended to provide food for thought and points for further discussion with other students and teachers.*

11. Can rubber be used as a structural material? Give examples of cases where it would be suitable.

12. You should now understand the way rubber behaves in uniaxial tension and the effect of temperature on its behaviour. Using what you have learnt, consider how a large block of rubber would deform under uniaxial compression. You should use what you know about the way polymer chains behave in your answer.

13. Look back over the balloon experiment, where the deformation of rubber under biaxial tension was demonstrated. What do you think would happen if the balloon had been heated in the same way the rubber strip was heated in the first experiment? Would the balloon have contracted? What would have been the effect of heating the air inside the balloon?

Going further =

### Websites

* "Everything you ever wanted to know about rubber (history, biographies, chemistry and conservation)", a site maintained by John Loadman.
* "A Brief History of Rubber" (based on Wade Davis, One River 1996), a site maintained by Rhett A. Butler, promoting awareness of environmental issues surrounding rainforests.
* "History of Natural Rubber", a site maintained by IRRDB, Kuala Lumpur, Malaysia.
Aims

On completion of this TLP you should be able to:

* Know the physical attributes that superconducting materials exhibit;
* Understand the classes of superconductor that exist;
* Have an elementary understanding of the mechanisms that give rise to the properties exhibited by superconducting materials;
* Be familiar with some of the applications of superconductors.

Before you start

This TLP is designed to require minimal previous knowledge. However, some familiarity with basic electronic structure and band structure might prove useful; reading through the TLP on band structure may prove useful before starting this one.

Introduction

The invention of a technique to liquefy helium by Heike Kamerlingh Onnes in 1908 provided scientists with a means of reaching an entirely new range of low temperatures in the vicinity of absolute zero. Helium liquefies at 4.2 K, and using this newly accessible range of temperatures to investigate the electrical properties of materials, Onnes found an abrupt drop in electrical resistance: the resistance of the mercury wire he was examining became so low that he could not measure it. This was the first time that anyone had encountered the phenomenon of perfect conduction or "superconductivity". In addition, the same class of material was later found to expel magnetic fields. The combination of perfect conduction and perfect magnetic field expulsion is what defines a superconductor.

The often weird and strange world of quantum mechanics is generally considered to be confined to the atomic level, but with superconductivity we actually find a state of matter which exhibits some of these bizarre quantum properties at a macroscopic level. This is what makes superconductivity such an exciting area of study, and it leads to some new and strange effects.

Discovery and properties

Electrical Resistance – the perfect conductor -

Before the successful liquefaction of helium, scientists were unsure about the full temperature dependence of the electrical resistance of metals. It was known that in the region of room temperature, resistance dropped linearly with decreasing temperature. However, as the temperature was lowered this linear relationship failed and the reduction in resistance became smaller. Thus three possibilities were postulated for the limiting behaviour. In fact, an entirely different dependence was discovered. Using mercury, which could easily be made very pure, Onnes discovered that, instead of a smooth transition down to zero resistance, at about 4.2 K the resistance of the wire suddenly dropped to below the accuracy of his instruments. The resistance had indeed disappeared and he had discovered a new state, which he named the superconducting state. The temperature at which the transition to superconductivity occurs is known as the critical temperature, Tc.

In order to discover whether the resistance was in fact zero, or just very low, an experiment was designed to measure how long a current would flow in a ring where it had been induced by a magnetic field. As any measurement of the current would inevitably alter it and introduce some resistance, the magnetic field produced by the flowing current was measured instead. It has been found that there is no reduction in current over the time period that anyone has had the patience to measure it (the record is over 2 years!). This proved that the resistance was indeed zero.
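The persistent-current experiment puts a startlingly low upper bound on any residual resistance. Here is a rough order-of-magnitude sketch; the ring inductance and the detectable fractional decay are assumed values, chosen only for illustration:

```python
import math

# Current in a closed ring decays as I(t) = I0 * exp(-R * t / L).
L = 1e-7                       # assumed ring self-inductance, H (illustrative)
t = 2 * 365.25 * 24 * 3600     # two years, in seconds
frac = 1e-4                    # assumed smallest detectable fractional decay

# No observable decay means exp(-R*t/L) > 1 - frac, so:
R_max = -L * math.log(1 - frac) / t
print(f"R < {R_max:.1e} ohm")  # ~1.6e-19 ohm - zero for all practical purposes
```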
Many other elements were subsequently found to exhibit a transition from normal to superconducting behaviour at some critical temperature, and still more are found to be superconducting if high pressure is applied.

Magnetic Properties – perfect diamagnetism

In 1933 another breakthrough was made in the subject when Meissner and Ochsenfeld started to investigate the magnetic properties of materials as they transitioned from normal to superconducting behaviour. What they found was entirely unexpected and led to the formulation of a theory that could explain superconductivity. They observed that when a superconducting material was placed within a magnetic field, the field was completely expelled from the interior of the sample. The ability of a material to partially expel a magnetic field was already known, and is called diamagnetism. Almost all materials exhibit some degree of diamagnetism, although the effect is usually tiny. In the case of superconductors the effect is large and unexpected.

The phenomenon can be explained by considering a solid sphere of superconducting material. If a magnetic field is applied, currents are induced in the surface of the sphere which exactly oppose the applied field and allow no magnetic field to penetrate the sample. This phenomenon is shown to dramatic effect when a section of superconducting material is placed above a magnetic track. The field from the base is excluded from the superconductor and it levitates. If the superconductor is tapped sideways, it will travel around the track with virtually no resistance to its motion. The video below shows this happening.

Until 1986 it was thought that superconducting behaviour was confined to certain materials at temperatures below ~30 K. A theory called "BCS theory", after its creators John Bardeen, Leon Cooper and Robert Schrieffer, had been formulated to describe superconductivity. This theory, for which its creators received the Nobel Prize in Physics in 1972, appeared to back this up, putting a limit on the critical temperature of around 30 K. However, in 1986 a new class of ceramics was discovered to have critical temperatures far in excess of this, much to the amazement of the scientific community. Research into this family of ceramics quickly yielded materials with critical temperatures in excess of 77 K. This breakthrough meant that superconducting behaviour could be observed at liquid nitrogen temperatures, instead of requiring the far more expensive liquid helium cooling that had been used previously. The graph below charts the development of superconducting materials.

![Timeline graph](images/timeline.jpg)

Theory

Although superconductivity was discovered as early as 1911, it was not until 1957 that Bardeen, Cooper and Schrieffer postulated a satisfactory explanation of the microscopic mechanism behind the effect.

Two Fluid Model -

One of the first models to be formulated to start to describe superconductivity was the **two fluid model**. This model proposed that electrons within a superconductor appear as two different types, normal and superconducting, i.e. some of the electrons behave as they would in a normal metal and obey Ohm's law, while others are responsible for the superconducting nature of the material. The crucial aspect of this model is that below Tc, only a fraction of the total number of electrons are capable of carrying a supercurrent. As the temperature is lowered, more of the electrons become the superconducting type while fewer remain as normal electrons.
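The TLP does not quantify how the superconducting fraction grows on cooling; the form usually quoted for the two fluid (Gorter-Casimir) model, which is an assumption here rather than something given in the text above, is n_s/n = 1 - (T/Tc)^4. A minimal sketch:

```python
def superconducting_fraction(T, Tc):
    """Two fluid (Gorter-Casimir) form: fraction of 'superconducting'
    electrons at temperature T; zero at and above Tc."""
    return max(0.0, 1.0 - (T / Tc) ** 4)

Tc = 4.2  # K, e.g. mercury
for T in (0.0, 2.1, 4.0, 4.2):
    print(f"T = {T} K: superconducting fraction = "
          f"{superconducting_fraction(T, Tc):.3f}")
```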
In the two fluid model, normal and superconducting currents are assumed to flow in parallel when an electric field is applied. However, as the superconducting current flows with no resistance, it will carry the entire current induced by any electric field. At the time the model was proposed there was no direct evidence for the existence of the superconducting electrons, but it did help to explain some of the puzzling experimental observations.

London Conjecture -

The next breakthrough in attempting to form an adequate theory came when two brothers, Fritz and Heinz London, made a useful connection between quantum mechanics and superconductivity. They correctly postulated that the diamagnetic properties of superconductors could be described by thinking of the material as a 'giant atom' with electrons orbiting around the edges producing the shielding currents responsible for the Meissner effect. This 'giant atom' could be produced by having all of the electrons in the body correlated in such a way that the entire specimen could be described by a single wavefunction.

If electrons flow in circulating currents around the surface of a superconductor, they will set up a magnetic field which is equal in magnitude but opposite in direction to the external field applied. This will cause the applied field to be completely expelled and result in the Meissner effect. However, the exclusion of the field from the interior cannot take place exactly up to the surface. This would cause a discontinuous jump in the magnetic field, which would require an infinitely large current density at the surface; this cannot occur. Thus the magnetic field penetrates the material over a thin surface layer. The *London equations* quantitatively describe these screening currents and the magnetic field in the surface layer. In the simple one-dimensional case of a magnetic field applied parallel to the surface of a superconductor, the field decays exponentially with depth z into the material, over a characteristic length λ:

\[{B\_x}(z) = {B\_0}{e^{ - z/\lambda }}\]

Detailed calculations using the London model show that λ is given by

\[\lambda = \sqrt {\frac{{{m\_{\rm{s}}}}}{{{n\_{\rm{s}}}e\_{\rm{s}}^2{\mu \_0}}}} \]

where *m*s, *n*s and *e*s are the mass, number density and charge of the superconducting charge carriers. This characteristic length, λ, is known as the **London penetration depth**. A small value of the penetration depth implies that the magnetic field is effectively expelled from the interior of a macroscopic sample. Because the number density of superconducting electrons depends on temperature, the penetration depth does too. According to the London model, the penetration depth rises asymptotically as the temperature approaches Tc. Thus the field penetrates further and further as the temperature approaches Tc, and does so completely above Tc.

![London penetration](images/london_penetration.jpg)

Cooper pairs

In order for electrons to be able to move in some coherent manner and exhibit superconducting properties, there must be some type of interaction between them. Ordinarily, electrons repel each other due to the Coulombic interaction of their like charges, but for electrons to become coherent there must be some type of *attraction* between them. The breakthrough in describing how there could possibly be an attractive force between two electrons came as a result of experiments looking at the effect of nuclear mass on the critical temperature.
Different isotopes of the same element were found to have different critical temperatures, which led scientists to conclude that the underlying lattice must make some contribution to the superconducting effect. It was Leon Cooper who came up with the idea that vibrations within the lattice could indeed interact with electrons and cause there to be an attraction between them. The animation below shows the basic mechanism by which this attraction occurs.

Often this pairing of electrons is visualised in terms of ball bearings (the "electrons") resting on a rubber sheet (the "lattice"). Putting one ball bearing on the sheet will cause it to stretch, creating a depression in which the ball sits. This lowers the gravitational potential energy of the ball. If another ball is placed on the sheet, it too will form a depression, but if it is placed near enough to the first, the two will roll together and form a deeper depression. This lowers the overall gravitational potential energy of the two balls and creates a coupling between them that would not exist without the rubber sheet. The animation below gives an idea of how this occurs. In practice, this is only a schematic representation of the microscopics of the interaction within electron pairs.

Video of the movement of particles into a depression

This analogy can be taken further if we consider the balls to be moving. As the first electron moves it causes the lattice to distort and creates the depression in the rubber sheet. However, the motion of the ball and the relaxation of the rubber sheet occur on different time scales, with the ball moving much faster. This means that there is still a depression in the rubber sheet even after the ball has moved on. This allows the second ball to roll into the well and become effectively bound to the first ball. This is demonstrated by the next animation.

Video of the movement of particles into and out of a depression

Up to now we have considered pairs to be correlated over a fairly short distance. In fact the mean separation at which pair correlation becomes effective is between 100 and 1000 nm. This distance is referred to as the **coherence length, ξ**, of the Cooper pair. The coherence length is large compared with the mean separation between conduction electrons in a metal, so Cooper pairs overlap greatly: within the span of one pair there may be up to 10⁷ other electrons, themselves bound as pairs.

BCS theory

We are now almost in a position to explain how type I superconductivity arises, but first we need to look at how electrons are arranged in normal solids, and how this arrangement differs in the superconducting state. We know that in normal electronic conduction the electrons that carry the current are scattered by impurities and lattice vibrations that interrupt their motion. In superconductors, however, the superconducting current is carried by Cooper pairs, which can only be scattered as single objects. For the electrons which make up a pair to be scattered, and produce an interaction we observe as electrical resistance, the Cooper pair must be split apart. This requires an energy at least equal to the energy gap produced by the binding energy of the Cooper pairs.
Due to random energy fluctuations, even at temperatures below Tc, there will sometimes be enough energy to break a pair and alter the momentum of the electrons. In order to stop the current, however, all of the pairs would have to be broken, which would require a considerable combined effort. As the total energy of the system increases with rising temperature and approaches Tc, more and more pairs are broken as electrons are excited above the energy gap. At the transition temperature there are no Cooper pairs left. Theoreticians often consider the breaking of Cooper pairs as the creation of excitations consisting of the electrons which were previously bound as a pair. These "free electrons" are referred to as *quasi-particles*. At any temperature above 0 K there will be both bound pairs and quasi-particles present. This has striking similarities to the two fluid model, which was proposed as a purely phenomenological model.

Type I vs Type II =

As scientists began to probe the exciting new properties of superconductors, they discovered that superconductors did have limitations apart from a maximum operating temperature. Although currents can flow without any energy dissipation, superconductivity is destroyed by the application of a sufficiently large magnetic field, or if the flowing electrical current density exceeds a critical value. The critical magnetic field depends on how far below the critical temperature the material is. The graph below shows this dependence.

As more superconducting materials were discovered, it was found that they fell into one of two classes, or "types", with regard to their magnetic properties, and in particular in the way that they expel magnetic fields. "Type I" superconductors have a sharp transition from the superconducting state, where all magnetic flux is expelled, to the normal state. Type II superconductors, on the other hand, exhibit similar behaviour by completely excluding a magnetic field below a lower critical field value and becoming normal again at an upper critical field. However, when the magnetic field is between these lower and upper critical fields, the superconductor enters a "*mixed state*" in which there is partial penetration of flux.

In order to lower the overall magnetic energy, the material allows bundles of flux to penetrate the sample. Within these filaments the magnetic field is high and the superconductor reverts to normal conducting behaviour. Around each of the filaments is a circulating vortex of screening current which opposes the field inside the core. This arrangement ensures that the material outside these bundles remains in the superconducting state. The graph below shows the differing dependence of magnetic field on temperature which characterises type II superconductors.

The so-called flux vortices often arrange themselves into regular periodic structures, and can be visualised by covering the surface with very fine ferromagnetic particles. The animation below shows a micrograph taken of a type II superconductor in the mixed state and how it arises from the partial penetration of flux. The sample as a whole continues to have zero resistance: current flows by the easiest path, and as there are superconducting regions, it can still flow without energy loss. It must be noted, however, that if the vortices move they will dissipate energy. For the superconductor to remain lossless, the vortices must be pinned in place by defects within the crystal structure of the material.
Current research aims to understand and pin the resistive motion of flux vortices in applied superconductors, with the aim of creating high critical current materials for applications. The reason that some superconductors form a mixed state lies in the relationship between the coherence length, **ξ**, and the London penetration depth, λ. As well as describing the distance over which Cooper pairs can be considered to be correlated, the coherence length also describes the distance over which the superconductor can be represented by a wavefunction. For reasons we will not go into, if the penetration depth, λ, is greater than the coherence length, ξ, it is thermodynamically favourable for the magnetic field to penetrate the specimen, and the material will be type II. This is shown schematically in the diagram below.

Relationship between the coherence length and penetration depth for Type I and Type II superconductors

Applications

Power Transmission

One of the most obvious applications of superconductors would appear to be the exploitation of zero resistance in current-carrying wires to transport electrical energy. Currently, overhead power transmission lines lose about 5% of their energy due to resistive heating. In relative terms this does not seem like a large amount but, given the vast amount of power that is delivered, it equates to large wastage in real terms. Clearly this technology has not yet been realised: there are no superconducting wires transmitting power on a commercial scale.

A commercially viable superconducting wire must have as high a critical temperature as possible, as well as being able to handle significant current densities. However, the mechanical properties of the material must also be considered when designing a wire, to ensure it is resilient and flexible enough to be used as a replacement for conventional copper wires. This means that the high temperature ceramic superconductors discovered more recently are often not yet well suited to this purpose. In large scale applications, alloys of niobium and titanium tend to be used. These require liquid helium coolant, which adds to the running cost. The wires also have to be much larger than expected in order to avoid what are known as "quenches". These occur if the wire momentarily stops being superconducting and returns to its normal state. Such a quench creates a region of high electrical resistance and rapidly dissipates a large amount of energy. This can lead to part of the wire being vaporised, destroying the functionality of the wire. Currently, the cost of manufacturing superconducting wires, as well as the cost incurred in maintaining liquid helium temperatures, prohibits the use of superconductors in the commercial transmission of power.

Magnetics -

One of the most successful applications of superconductors is in the production of very large magnetic fields. Here superconducting wires are wound into a coil and a high electrical current is passed along the wire in order to produce very high field strengths. One of the most important applications requiring a very high magnetic field is Magnetic Resonance Imaging (MRI). This technique uses the high field to split the degenerate spin states of the hydrogen nucleus, which can then be investigated using electromagnetic radiation in the radio wave region.
This allows the machine to image two-dimensional cross sections containing hydrogen atoms in different chemical environments. As the body contains many hydrogen atoms present in water, different tissues in the body give different signals. These two-dimensional slices can be built up to form complete pictures of the area of the body being imaged.

Another area which exploits the high field strengths achievable with superconducting magnets is high energy physics. High strength magnets are key components in the particle colliders used to probe the most basic constituents of matter. They are used to deflect high velocity charged particles, keeping them on a circular path so that they can be continually accelerated. High strength magnets are also used in fusion research to contain plasmas. This high temperature state of matter cannot be contained by conventional materials and must be levitated and enclosed using high strength magnetic fields.

Another application making use of superconductors to produce magnetic fields, and one which is already in use, is magnetic levitation (maglev) trains. There are working examples of this technology in both Germany and Japan. The animation below outlines the two basic designs of maglev trains and describes how they work.

Electronic Applications -

Current microelectronics are beginning to be limited by the rate at which the heat produced by the electronic circuits can be removed. In order to speed up computing power, interconnects between various components of the circuit can be shortened. However, this creates even more heating problems, due to the higher current densities which have to be used. Superconducting wires could eventually be used to remove resistive heating and help solve this problem. Current silicon-based technology is also limited by the speed at which transistors can switch between their 0 and 1 states. Superconducting junctions can be made by exploiting a phenomenon known as the Josephson effect (which is not covered in this TLP), which allows for much greater switching speeds and could greatly increase computer processing speeds.

The Josephson effect is also exploited in making Superconducting Quantum Interference Devices (SQUIDs). These devices allow exceptionally small magnetic fields to be measured, and are so sensitive that they can detect the tiny magnetic fields produced by the currents that flow along nerve impulses. This allows for a powerful new technique in neurological research and investigations of the brain.

Summary =

This TLP has covered the main aspects of superconductivity:

* The discovery of the superconductive transition at low temperature by Onnes in 1911.
* The properties that define superconductivity – zero resistance and perfect diamagnetism.
* The idea that superconductivity can be explained by a macroscopic quantum state, in which many electrons move coherently and can be described by a single wavefunction.
* The London equations, which provide a phenomenological model describing the magnetic field penetration within a superconductor.
* The formation of Cooper pairs by the interaction of electrons with lattice vibrations.
* The basics of BCS theory and the creation of an energy gap between filled and unfilled electronic energy levels.
* The difference between type I and type II superconductors with respect to how they interact with external magnetic fields.
* Applications of superconductors and their relevance to everyday life.
Questions =

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. How does the resistance vary with temperature in a perfect superconductor? | | | | | - | - | - | | | a | The resistance falls continuously as the temperature approaches absolute zero. | | | b | The resistance initially falls but passes through a minimum before rising as the temperature approaches absolute zero. | | | c | The resistance initially falls before discontinuously dropping to zero at a temperature above absolute zero. | | | d | The resistance falls continuously down to a limiting value as the temperature approaches zero. |

2. Which property characterises superconductivity along with zero electrical resistance? | | | | | - | - | - | | | a | Perfect Ferromagnetism | | | b | Perfect Paramagnetism | | | c | Perfect Diamagnetism | | | d | The expulsion of any externally applied magnetic field. |

3. What carries the current in a superconductor once the electric field is removed? | | | | | - | - | - | | | a | Cooper pairs | | | b | Free electrons | | | c | Bound protons |

4. What causes electrons in a Cooper pair to be bound together in a conventional superconductor such as Nb or Sn? | | | | | - | - | - | | | a | A coulombic interaction between the electrons | | | b | An interaction between electrons and the vibrations of the lattice | | | c | An interaction between electrons and the external magnetic field |

5. How does an externally applied magnetic field interact with a type II superconductor? | | | | | - | - | - | | | a | The applied magnetic field is fully expelled by the surface screening currents. | | | b | The applied magnetic field penetrates the material evenly. | | | c | The applied magnetic field penetrates the material in 'bundles' which are surrounded by screening supercurrents. |

Going further =

### Books

* Superconductivity – The Next Revolution, *Gianfranco Vidali*, Cambridge University Press, 1993
* Superconductivity, *Michael Tinkham*, New York: Gordon and Breach, 1966
* Superconductivity: Fundamentals and Applications, *Werner Buckel*, Weinheim: VCH, 1991
* Superconductivity of Metals and Cuprates, *J.R. Waldram*, Cambridge University Press, 1996
Aims

On completion of this TLP you should understand:

* how Deformation Twinning and Martensitic Transformations can generate a shape change
* the phenomenon of Superelastic Deformation, which is a reversible Shape Change effected by Martensitic Transformations
* the role of temperature, which affects the driving force for a Martensitic Transformation, and can also influence the ease of slip (dislocation motion)
* the Shape Memory Effect, in which temperature changes are used (in alloys with certain characteristics) to promote reversible Martensitic Transformations and hence to control component shape

Before you start

It might be useful to understand how localised strain varies within a spring subjected to axial extension, which is covered in a separate TLP. However, the issue is relevant only to a small part of this TLP. It would also be helpful to have a clear understanding of the basic concepts of thermodynamics.

Introduction

Certain metallic alloys (and also some polymeric and ceramic materials) exhibit unusual behaviour when subjected to mechanical load and/or temperature change. This is due to shape changes being generated by *Martensitic Phase Transformations*, rather than by conventional elastic (bond stretching) or plastic (dislocation glide) deformation. Something very similar happens during deformation twinning. However, unlike the case of twinning, these phase transformations are *reversible*, at least under certain conditions, and hence the associated shape change can also be reversed. This can lead to interesting and useful effects, such as a capacity to cycle a component between two different macroscopic shapes by cycling the temperature. These alloys are commonly known as *Shape Memory Alloys* (SMAs).

The most common of these alloys is an equi-atomic alloy of Ni and Ti, known as 'Nitinol', although both Ni-rich and Ti-rich alloys are also used. Other examples include Cu-Zn and ternary alloys like Ni-Cu-Ti or Ni-Hf-Ti. They have been used for a variety of applications, including pipe couplings, earthquake dampers, eyeglass frames, orthodontic wires, mobile phone antennas, micro-actuators, and a variety of biomedical devices.

Martensitic Phase Transformations - A Simple Example

Martensitic transformations are diffusionless shear transformations, most commonly driven by mechanical deformation or by a change in temperature. A martensitic transformation is an example of a displacive transition, in which there is cooperative motion of a relatively large number of atoms, each being displaced by only a small distance (a fraction of an interatomic spacing) relative to its neighbours.

A simple example of a martensitic phase transformation is provided by the transition from cubic close-packed (ccp) to hexagonal close-packed (hcp). This occurs by the systematic sliding of close-packed planes, ie {111} planes in ccp, over one another. As it happens, the same kind of sliding (in the same direction and by the same distance) can also lead to twinning (ie the same ccp crystal structure, but in a different orientation). A twin is formed when every (111) plane undergoes this displacement, relative to the plane below it, whereas the hcp structure is created when every second (111) plane does this.

Martensitic Phase Transformations - Basic Thermodynamics

Some simple features of the thermodynamics relevant to phase transformations are given here. The quantity of primary concern is the Gibbs free energy, G.
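The competition between the two phases can be sketched with linearised free energies G = H - TS. The coefficients below are made up, purely for illustration (see also the footnote below on the true sign of dG/dT):

```python
# Schematic free-energy curves G = H - T*S for the two phases.
# The numbers are invented, chosen only so that the low-enthalpy,
# low-entropy phase (martensite) is stable at low temperature.
H_mart, S_mart = 0.0, 1.0      # arbitrary units
H_aust, S_aust = 300.0, 2.0    # higher enthalpy, higher entropy

def G(H, S, T):
    return H - T * S

# The two curves cross at the equilibrium temperature T0:
T0 = (H_aust - H_mart) / (S_aust - S_mart)
print(f"T0 = {T0}")            # 300.0 in these units

for T in (200.0, 400.0):
    stable = "martensite" if G(H_mart, S_mart, T) < G(H_aust, S_aust, T) else "austenite"
    print(f"T = {T}: {stable} has the lower G")
```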
The animation below shows the stability of the two phases as a function of temperature. For superelastic and shape memory alloys, the two phases are normally termed "**austenite**" (stable at higher temperatures) and "**martensite**" (stable at lower temperatures). Sometimes the austenitic phase is termed the "**parent**" phase. It is important to be clear that this terminology is generic – i.e., these phases are not any specific ones, but refer to a type of phase. In particular, it is important to avoid any confusion with the austenite and martensite phases that form in steels. As it happens, while the austenite-to-martensite transformation that occurs in the Fe-C system obviously is a martensitic transformation, it is crystallographically complex and exhibits certain rather special characteristics. Moreover, for reasons that need not be detailed here, superelasticity and shape memory behaviour are **NOT** normally exhibited by steels.

***Footnote:** Although commonly presented in this form, it's not strictly correct for the free energy to be shown as decreasing with increasing temperature. It actually increases, despite the -TS term, because the enthalpy increases approximately linearly with temperature (with the proportionality constant being the specific heat of the material). However, when comparing two phases (with different entropies, but similar specific heats) in this way, the changes in enthalpy are often neglected. Both plots then decrease with increasing T (due to the -TS term), with the decrease being at a higher rate for the (more disordered) phase with higher entropy. (This is why highly disordered phases, such as liquids and gases, tend to be stable at higher temperatures.)*

Martensitic Phase Transformations - Hysteresis Characteristics

Ideally, leaving aside the possibility of phases forming with different compositions, the most stable phase (ie that with the lowest free energy) should be present at any given temperature. However, in reality a phase may persist beyond the temperature range in which it is thermodynamically stable. This is because a driving force is often required in order to form the new phase within the existing phase. For example, a common cause of this is a **nucleation barrier**, which is associated with the large interfacial energy contribution, per unit volume, for small transformed volumes.

A further potential source of such behaviour, which applies only when both phases are solid, and is particularly important for shear transformations, is the **stored elastic strain energy** that arises when a region undergoes a shape change as it transforms. Unlike the case of a nucleation barrier, the energy penalty associated with this elastic strain continues to rise as larger volumes of material transform. The upshot of this is that pronounced hysteresis is commonly observed in the phase transformations that occur during thermal cycling of SMAs. Four characteristic temperatures are commonly defined (a toy numerical model follows the list):

* The austenitic phase start temperature, ***A*s**, which is the temperature at which the martensitic phase begins to transform into the parent (austenitic) phase.
* The austenitic phase end temperature, ***A*f**, which is the temperature at which the martensitic phase has completely transformed into the austenitic phase.
* The martensite start temperature, ***M*s**, which is the temperature at which the austenitic phase starts to transform into the martensite phase.
* The martensite end temperature, ***M*f**, which is the temperature at which the austenitic phase has completely transformed into the martensite phase.
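Here is a toy model of the hysteresis these four temperatures describe. It assumes, purely for illustration, that the transformed fraction varies linearly between the start and finish temperatures on each branch, with invented values for the four temperatures:

```python
# Illustrative transformation temperatures (K): Mf < Ms < As < Af
Mf, Ms, As, Af = 290.0, 310.0, 330.0, 350.0

def clamp(x):
    return min(1.0, max(0.0, x))

def martensite_fraction(T, cooling):
    """Martensite fraction at temperature T, assumed linear on each branch."""
    if cooling:   # austenite -> martensite, between Ms and Mf
        return clamp((Ms - T) / (Ms - Mf))
    else:         # martensite -> austenite, between As and Af
        return clamp((Af - T) / (Af - As))

# At 320 K the state depends on history - the signature of hysteresis:
print(martensite_fraction(320.0, cooling=True))    # 0.0 (still austenite)
print(martensite_fraction(320.0, cooling=False))   # 1.0 (still martensite)
```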
These temperatures are not very well defined, since they depend on the experimental conditions (for example, heating and cooling rates).

Superelasticity - Strain Accommodation by Martensite Formation

Superelasticity (SE), sometimes termed "pseudo-elasticity" or "pseudo-plasticity", occurs without any change in temperature. SE takes place at temperatures above As (although usually only slightly above), where the austenitic phase is the more stable of the two thermodynamically, although not by very much. When a mechanical strain is imposed, this can stimulate the transformation of austenite to martensite, sometimes termed "**stress-induced martensite**". The associated shear of local regions accommodates the imposed macroscopic shape change, while the lower strain energy component ensures that the overall free energy is now lower than it would be if the austenitic phase were still predominant.

Relatively large strains (up to about 8%) can be accommodated in this way. As the strain is increased, the proportion of the specimen that has transformed to martensite progressively rises. This occurs without much increase in the applied stress, giving rise to a characteristic "superelastic plateau". These strains are much higher than would normally be possible during conventional elastic deformation (up to ~0.5% for most metals). Nevertheless, they are recoverable when the applied load is removed. When this is done, the material reverts to the austenitic phase. Since this tends to occur by the individual martensite crystallites shearing back to the austenite crystals from which they were formed, the original specimen shape is recovered. However, this is only possible if all of the deformation (apart from that due to conventional elasticity) has been achieved by martensitic phase transformations. If an excessive strain is imposed, then it is likely that some conventional plastic flow (dislocation glide) will occur, and of course this will be irreversible.

Superelasticity - Hysteresis in the Stress-Strain Behaviour =

Stress-strain plots of SE alloys exhibit pronounced hysteresis, since the reverse (martensite-austenite) transformation does not occur at the same stress levels during unloading as the forward transformation did during loading. This is analogous to the hysteresis observed during thermal cycling, and occurs for the same reason - ie extra driving force is required due to the stored elastic strain energy contribution.

**![Diagram of apparatus to investigate superelasticity of a Ni-Ti spring](images/SE_diagram.jpg)**

Diagram of apparatus to investigate superelasticity of a Ni-Ti spring

To investigate superelasticity, a coil of Ni-43 at% Ti wire was held at a temperature above its *A*f temperature inside a Perspex tube and was then loaded and unloaded, with the associated extensions recorded. Superelastic behaviour can readily be illustrated using a specimen in the form of a spring. This geometry is very convenient for such purposes, since it allows large macroscopic extensions to be generated while the local strain (which is pure shear) remains relatively low. The relationship between local and macroscopic strains, and the role of wire diameter and spring diameter, are fully explained in a separate TLP. The results clearly show the hysteresis expected. From the initial (linear) relationship between stress and strain, the Young's modulus of the austenitic phase can be obtained.
The gradient gives the shear modulus, which can be converted to the Young's modulus using

\[E = 2G\left( {1 + \nu } \right)\]

where G is the shear modulus, E is the Young's modulus and ν is the Poisson ratio; for Nitinol, ν ≈ 0.3.

![Graph of shear stress vs shear strain for loading and unloading](images/SE_experiment_all.png)

![Graph of shear stress vs shear strain](images/SE_experiment_linear.png)

This leads to a value for *E* of about 60 GPa, which is in the range expected.

Shape Memory Effect - "Training" of the Transformation

The shape memory effect also involves martensitic transformations, but in this case they are stimulated not by imposed mechanical strain but by changes in temperature. It also involves the material being "**trained**" to have a preferred shape. This is done in the following way. The component is given a thermo-mechanical treatment, which involves holding at a high temperature (usually well above Af), followed by cooling (to below Mf), while mechanically constrained to have a particular shape - eg a spring. Stress relaxation occurs during the holding period and then, during cooling (in the constrained shape), the austenite-martensite transformation takes place in such a way as to minimise the overall shape change. When a portion of the austenitic lattice shears so as to form the martensitic phase, there are usually several alternative directions in which it can do this – forming what are often termed different "**variants**" – so it's possible for groups of variants to be formed which, taken together, have a very similar shape to the original parent material. This training predisposes the component to adopt the shape concerned when the phase transformation occurs, since this minimises the associated elastic strain energy.

Once a component has been "trained" in this way, then, after it has been deformed in an arbitrary way, it can recover its "trained" shape just by reheating it to above its Af temperature.

Video of spring being distorted at room temperature; on heating it returns to its original shape

It's also possible, using slightly more complex thermo-mechanical treatments, to create components which exhibit a "two-way shape memory effect", such that they can be cycled between two pre-determined shapes by thermal cycling. Such components are used in devices such as those designed to automatically open greenhouse windows in hot weather, and close them when it gets cooler.

Shape Memory Effect - The "Ferris Wheel" Experiment =

An illustration of the shape memory effect is provided by the "ferris wheel" set-up shown in the accompanying video and simulation. Ten "trained" nitinol springs comprise the periphery of the wheel, connecting ten thin steel spokes via relatively massive brass weights attached at their ends. The "trained" shape of these springs, which is promoted by heating, is the contracted form. As individual springs cool, on the other hand, the forces acting between them tend to stretch them out again. If the springs on one side of the wheel are heated, for example with a simple fan or radiant heater, then they tend to contract - ie to adopt the trained shape, as explained on the previous page. This bends the adjacent spokes, so as to move the brass weights towards the side where the heater is located. This creates a net moment tending to rotate the wheel.
Exactly how the wheel rotates depends on the heating and cooling characteristics, on the transformation characteristics of the alloy, and on the dimensions (wire and spring diameters) of the springs. Watch a video of the actual wheel, and then see what happens to the individual springs when they are heated and cooled. Video of wheel rotating on heating. **Heating**: close-up video of wheel springs heating. **Cooling**: close-up video of wheel springs cooling. Microstructural Changes during Thermo-Mechanical Treatment The martensitic phase transformations taking place during superelastic and shape memory behaviour cause characteristic changes in the microstructure. These are particularly striking if viewed dynamically, when the nature of the shear displacements taking place can often be seen very clearly. This is assisted by using viewing conditions such that the martensitic and austenitic phases are readily distinguishable. The two videos available here show: 1) a martensitic specimen being mechanically compressed, inducing two sequential changes of the orientation of the crystal by twinning; 2) a specimen being cooled, and then heated, inducing transformation to martensite and then reversion to the austenitic phase. In both cases, the specimens are being viewed by optical microscopy, using Nomarski differential interference contrast. The width of the viewed area is in both cases about 200 µm. Video 1: A CuAlNi single crystal (2H orthorhombic phase) is compressed (vertical axis) at room temperature, causing activation of two sequential twinning deformations. As austenite, the crystal is cube-shaped, whereas in the martensite form it is sheared. Six different sheared martensite crystals, having well-defined prism shapes (three of which appear in this video), can be created by pressing on 3 different faces of the cube. It is essential that the loading arrangement allows lateral displacements to occur. Video 2: A bi-crystal of austenitic CuAlNi is cooled, causing transformation to the martensitic (2H orthorhombic) phase. The process is reversed in the second half of the video, as the specimen is heated again. The rate at which transformation occurs is controlled by heat flow effects. (The shear process itself tends to take place very rapidly.) The martensitic phase is internally twinned. This is very clear within the dark-coloured phase moving in from the left-hand side in the first part of this video. These videos are made available by the courtesy of Prof. Vaclav Novak and Prof. Petr Sittner, from the Department of Functional Materials, in the Institute of Physics of the ASCR, Prague, Czech Republic. Further technical details are available in the following publication: V. Novak, P. Sittner, S. Ignacova, T. Cernoch, *Transformation behavior of prism-shaped shape memory alloy single crystals*, Mat Sci and Eng A, **438-440** (2006), p. 755-762. Limits of Superelasticity = There are limits to the temperature and stress ranges within which superelastic deformation can occur.
The stress needed to initiate the austenite-martensite transformation rises with increasing temperature (ie as the austenite phase becomes thermodynamically more stable). This dependence is predicted by a form of the Clausius-Clapeyron equation: \[\frac{\mathrm{d}\sigma}{\mathrm{d}M_{\mathrm{s}}} = \frac{-\Delta H}{T\varepsilon_0}\] where Δ*H* is the latent heat of the transformation and \(\varepsilon_0\) is the associated strain. At temperatures well above the stress-free value of *M*s, quite substantial stresses may be needed to stimulate martensite formation. Furthermore, the stress needed to induce dislocation motion (in the austenite phase) is likely to fall with increasing temperature. The temperature at which these two processes (slip and martensite formation) require the same applied stress will be the upper limit for both superelastic deformation and the shape memory effect (since slip will occur preferentially above this temperature). Superelastic behaviour requires a minimum temperature of *A*f, since the specimen should be fully austenitic initially. The shape memory effect not only requires heating to above *A*f, but also cooling down to *M*f. Therefore SE occurs at temperatures between *A*f and *M*d, where *M*d is the temperature at which slip becomes easier than the formation of the martensitic phase (an illustrative calculation of *M*d is sketched at the end of this section). Other limits - It's also possible for local defects to accumulate during repeated transformation, which can reduce the achievable strain and the force that can be exerted by the transformations. Also, excessive deformation, beyond that which can be accommodated by transformation to martensite, will lead to irreversible strain (plastic deformation by slip). Applications The most widely used shape memory alloy is the equi-atomic nickel-titanium alloy known commercially as Nitinol. Superelasticity - Superelastic stents are used to hold open arteries or other vessels. They can be tightly compressed while being guided into the body; then, when released, they spring back to their larger shape. They are also used to hold together broken bones. Conventional pins need to be tightened as bone heals, which either involves further operations or an external framework. Superelastic devices, on the other hand, contract as the bone heals and provide guiding pressure, forcing the bones back to the correct shape. Other uses include spectacle frames and brassiere wires: if superelastic material is bent out of shape, it quickly returns to its original shape. Shape memory effect - Shape memory effects are used in actuators, to produce motion in response to temperature changes. A simple example is a greenhouse window, which automatically opens and closes in response to temperature changes. Other examples include the Boeing 787, where small chevrons on the trailing edge of the engine move with varying temperature. On take-off the engine is hotter and the chevrons move into a position which makes the engine run more quietly; once away from the airport, the engine cools in the colder air at altitude and the chevrons move to give better fuel economy. Another example is in clips used to hold solar panels in place on both the Hubble Space Telescope and the International Space Station. When cold, they hold the panels closed up; when the panels heat up in the sun once out in space, the clips open and allow the panels to unfold to their full size. These mechanical solutions are preferred to electrical systems because they are more reliable, as there are fewer things to go wrong.
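To make the upper limit *M*d concrete, here is a minimal sketch, in Python, of estimating it as the temperature at which the (rising) stress for stress-induced martensite meets the (falling) flow stress for slip, as described under Limits of Superelasticity above. All numbers are assumed, illustrative values, not measured data:

```python
# Illustrative estimate of the upper temperature limit (Md) for superelasticity,
# assuming both critical stresses vary linearly with temperature (hypothetical values).

# Stress to stimulate martensite formation: rises with temperature
sigma_SIM_0 = 20.0    # MPa at the reference temperature T0 (assumed)
dSIM_dT = 2.0         # MPa per K (assumed)

# Flow stress for dislocation glide (slip): falls with temperature
sigma_slip_0 = 150.0  # MPa at T0 (assumed)
dslip_dT = -0.5       # MPa per K (assumed)

T0 = 313.0            # K (40 degC, assumed reference temperature)

# Md is where the two straight lines cross: sigma_SIM(T) = sigma_slip(T)
dT = (sigma_slip_0 - sigma_SIM_0) / (dSIM_dT - dslip_dT)
Md = T0 + dT
print(f"Estimated Md = {Md:.0f} K ({Md - 273:.0f} degC)")
# Above Md, slip occurs before stress-induced martensite can form,
# so superelastic behaviour is lost.
```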
Summary = An outline has been given of how displacive (shear) transformations can effect a macroscopic shape change. These transformations occur by the cooperative, systematic motion of all the atoms in the region concerned by small distances with respect to their neighbours. Unlike a similar shape change generated by conventional plasticity (dislocation glide), such transformations, and hence the shape change, can be reversed. This can occur by simply removing the applied stress, giving rise to so-called superelasticity. Shape changes can also be stimulated by changing the temperature, thus altering the relative stability of the two phases. This behaviour is exploited in the shape memory effect, in which defined shape changes are induced by changing the temperature, either just recovering a prescribed shape after mechanical deformation or cycling between two defined shapes by thermal cycling.

Questions = ### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. Why is a martensitic transformation often termed "displacive"?

   a. Because the specimen is moved to a different location when it occurs
   b. Because the free surface of the specimen becomes displaced when it occurs
   c. Because certain atoms take up the locations previously occupied by other atoms when it occurs
   d. Because all of the atoms within the transforming region are systematically displaced when it occurs

2. Which of the following statements are correct concerning displacive (martensitic) and diffusional phase transformations? (Yes/No for each)

   a. Diffusional transformations always take place more easily than displacive transformations
   b. The average velocity of the motion of individual atoms responsible for the transformation is higher for displacive transformations
   c. A displacive transformation can be reversed, whereas a diffusional transformation cannot
   d. Reversal of a displacive transformation may lead to recovery of the original specimen shape
   e. Displacive transformations are in general more likely to occur at lower temperatures, whereas diffusional transformations are favoured at higher temperatures

3. Martensitic transformations often exhibit hysteresis - for example, the temperature must be taken considerably above that at which the two phases have the same free energies during heating, in order for the transformation to go to completion, whereas it needs to be cooled well below that temperature in order for it to fully reverse. Which of the following explanations for this effect is correct?

   a. Since martensitic transformations normally involve a shape change, and both phases are solid, elastic strain energy is created when a local region transforms in this way, requiring the thermodynamic driving force to be further increased (via a change in temperature) in order for the transformation to continue.
   b. Because martensitic transformations occur very quickly, extra driving force is needed to provide the necessary kinetic energy for atomic motion
   c. Martensitic phases are always metastable, so there is no well-defined temperature at which they are expected to transform
   d. Formation of martensite phases always requires some twinning to occur at the same time, and this requires an extra driving force

4.
Unlike loading and unloading of a specimen to and from its conventional elastic limit, doing this to a superelastic material, to and from its superelastic limit, leads to energy being (permanently) absorbed within the specimen, despite the fact that the original specimen shape has been recovered. Assuming "ideal" superelastic behaviour, which of the following could happen to this energy? (Yes/No for each)

   a. Stored within the specimen in the form of a different proportion of the phases from that of the starting material
   b. Stored within the specimen in the form of elastic strain energy in the regions within and around transformed phases
   c. Stored within the specimen in the form of extra dislocations
   d. Dissipated in the form of sound waves created when shear transformations occur
   e. Dissipated in the form of heat

### Deeper questions

*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

5. A component, to be made from a NiTi Shape Memory Alloy, must be superelastic under service conditions. Thermal cycling, while monitoring the phases present, gave the plot below. Thermodynamic calculations indicate that the stress level needed to stimulate martensite formation rises with temperature at 1 MPa K−1. The flow stress (for dislocation glide) is 100 MPa at 20 °C and falls with increasing temperature at 0.3 MPa K−1. Calculate the maximum use temperature. ![](images/q5a.jpg) Dependence of phase proportion on temperature during (unloaded) thermal cycling

Going further = ### Books There are relatively few books specifically on this topic, and most of those are compilations of chapters at the research level. It is therefore currently rather difficult to identify a coherent, introductory-level book dedicated to the topic; however, a good starting point may be "Smart Structures: Analysis and Design" by A. V. Srinivasan and D. Michael McFarland, CUP, 2000. ### Websites There are a number of websites giving various types of information. A good starting point would be that of the Department of Functional Materials at the Czech Academy of Sciences, where Prof. Petr Sittner is based; various videos and illustrations of shape memory devices etc are available there.
Aims On completion of this TLP you should be able to: * explain to someone with A-level physics how a TEM works and how it forms a) an image and b) a diffraction pattern; * name the essential components of a TEM and define the meaning and significance of important terms such as resolution, magnification and image contrast; * explain the operation of electromagnetic lenses in terms of ray diagrams, particularly in the illumination system and the objective/intermediate system; * explain to a materials undergraduate how different types of contrast arise from beam-specimen interactions, and how these can be used to study materials in the TEM. Before you start Before starting this TLP, users should be familiar with electron properties and magnetic fields, how optical lenses work, and diffraction. Introduction Transmission electron microscopy is an immensely valuable and versatile technique for the characterisation of materials. It exploits the very small wavelengths of high-energy electrons to probe solids at the atomic scale. In addition, information about local structure (by imaging of defects such as dislocations), average structure (using diffraction to identify crystal class and lattice parameter) and chemical composition may be collected almost simultaneously. However, use of the microscope is highly skilled, and along with the interpretation of the information gained requires a good understanding of the processes occurring in the microscope, and of the structure of materials. This TLP provides a solid basis for learning the theory behind the electron microscope and the concepts needed to begin learning to use one. TEM structure = The figure shows a typical TEM system. Click on the various sections to learn about what they do. Illumination: Electron Source = At the top of the TEM column is the electron gun, which is the source of electrons. The electrons are accelerated to high energies (typically 100-400 keV) and then focussed towards the sample by a set of condenser lenses and apertures. Source - The source is chosen so that the emitted current density per solid angle (brightness) is maximised. This is so that the maximum amount of information can be extracted from each feature of the sample. There are two major types of electron source: thermionic emitters and field emitters. Electron guns based on thermionic emission are cheaper and more robust, and hence often found on older instruments. If enough thermal energy is added to a material, its electrons may overcome the energy barrier of the work function and escape. To avoid the source melting, the material used must either have a very high melting point (such as W) or an exceptionally low work function (certain rare-earth boride crystals such as LaB6 are widely used). Another way of extracting electrons from a material is by applying a very large electric field. By drawing tungsten wire to a very fine point (<0.1 μm), application of a potential of 1 kV gives an electric field of \(10^{10}\) V m−1, which is large enough to allow electrons to tunnel out of the tip. This is called electron field emission. Field emission guns are more expensive than thermionic electron guns, and must be used under ultra-high vacuum conditions. They are favourable for applications in which a high brightness and low energy-spread of incident electrons are needed (e.g. high-resolution TEM, electron energy loss spectroscopy). Illumination: Condenser System The shape of the beam of electrons emitted by the source can be approximated to a cone.
Manipulation of the electron beam is the key to getting information from the sample. This is achieved using electromagnetic lenses. Here we shall see how the paths of electrons in the microscope can be modified by the lenses to focus the beam as required. The action of electron lenses can be described in the same way as that of light-optical lenses. The way of describing the function of a lens in an optical system is by means of a ray diagram, which is a slight abstraction based on the thin lens approximation. This geometric construction allows us to see the behaviour of different rays incident on a lens. Electromagnetic lenses in a TEM By using a small number of lenses in series we can achieve very high magnifications (or demagnifications) very quickly, since these multiply. For example, three lenses each giving a magnification of 50× give a \(50^3 = 125\,000\times\) magnification when placed in series (see the sketch below). Any magnification may be achieved in theory. However, beyond a limit any increase in magnification becomes meaningless, as the amount of information available is limited by resolution. A typical TEM uses a system of two condenser lenses to control the beam incident on the sample. The first lens demagnifies the source, either to increase the brightness or to decrease the area of the specimen that is illuminated. A second lens with an aperture above it controls the convergence angle, \(\alpha\), of the beam at the specimen. It is possible to reduce the effects of spherical aberration dramatically through the use of a large number (as many as 50) of finely adjustable lenses acting in series, much like the elements in a camera lens are arranged to reduce chromatic aberration. With the computing power available today it is possible to adjust the lenses simultaneously to find the optimum combination of strengths. This has made it possible to construct aberration-corrected microscopes with a resolution better than 0.1 nm (1 Å). Image formation = In the central section of the microscope the electron beam interacts with the specimen, and the transmitted electrons are gathered and focussed ready for further magnification of the desired images. Sample Stage - As the electrons are incident on the sample they may be scattered by several mechanisms. These scattering mechanisms will change the angle at which the electrons are moving relative to the optic axis, and may be elastic (conserving energy) or inelastic (with energy transferred to the sample and dissipated as heat). By analysing the changes to the electrons transmitted through the specimen we can gather information about the material, and in particular we can study its morphology, crystal structure and composition. The sample itself is inserted into the path of the electrons and, for the best resolution, must be extremely thin: a few nanometres. This is to maximise the number of transmitted electrons, and to minimise multiple scattering events, which make it more difficult to deduce information about the material. Once inside the microscope, the specimen sits right inside the objective lens and must therefore be small - typically less than 3 mm in diameter. It is necessary to align the specimen very accurately with the electron beam to achieve good imaging. Common specimen holders allow rotation about two horizontal axes, along with lateral movement. Other holders might include heating or cooling elements, or nano-indenters to deform the specimen as it is imaged.
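As a minimal illustration of the thin-lens behaviour and the multiplying of magnifications described above, here is a short Python sketch; the object distance, focal length and per-lens magnifications are assumed, illustrative values, and `image_distance` is a hypothetical helper, not part of any library:

```python
# Minimal sketch: thin-lens description of electromagnetic lenses.
# The thin-lens equation 1/f = 1/u + 1/v relates object distance u,
# image distance v and focal length f; magnifications of lenses in
# series simply multiply.

def image_distance(f, u):
    """Image distance v from the thin-lens equation 1/f = 1/u + 1/v."""
    return 1.0 / (1.0 / f - 1.0 / u)

# A single lens (assumed values): object 2.0 mm from a lens of focal length 0.5 mm
u, f = 2.0, 0.5           # mm
v = image_distance(f, u)  # mm
print(f"image at v = {v:.3f} mm, magnification = {v / u:.2f}x")

# Lenses in series: the overall magnification is the product of the stages
stages = [50.0, 50.0, 50.0]   # three lenses of 50x each, as in the text
total = 1.0
for m in stages:
    total *= m
print(f"total magnification = {total:.0f}x")  # 125000x
```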
![](images/tips.jpg) Specimen holders Objective/Intermediate Lens System - The objective lens takes electrons transmitted through the specimen and forms a diffraction pattern (in the back focal plane) and an image of the specimen (in the image plane). In the conventional TEM we have the option of magnifying either the image or the diffraction pattern formed by the objective lens. This is achieved by changing the settings of the intermediate lens from the imaging mode to the diffraction mode. The ease with which the microscopist can move between the two modes is one of the things that make the TEM such a useful and versatile instrument. In imaging mode, the microscopist focuses the intermediate lens onto the image plane of the objective lens to produce a magnified version of the image further along the optic axis and on the viewing screen. To view a diffraction pattern, the intermediate lens is adjusted so that its object plane coincides with the back focal plane of the objective lens, where the first diffraction pattern is formed. The diffraction pattern is then displayed on the viewing screen. Viewing images After the electrons have passed through the specimen and been scattered to varying degrees, the information is converted into a macroscopic image. The simplest way of doing this is by simply magnifying the image or diffraction pattern until it is of the required size for analysis. This is the basis of conventional TEM. Alternatively, if a very fine beam of electrons is rastered across the sample, the amount of scattering from each point may be measured separately and successively, and an image gradually built up. This technique, requiring no lenses after the specimen, is called scanning TEM (STEM). ### **Projector System** The projection system magnifies the images or diffraction patterns formed from the specimen, projecting them onto the viewing screen, where the electron density is converted into light-optical images for the microscopist to see. ### **Screen** Beneath all the lenses is a phosphorescent screen that glows when it is struck by electrons, displaying the image or diffraction pattern. The screen is viewed through a lead-glass window (to protect the users from X-rays generated in the microscope). ### **Image contrast** The information contained in a TEM micrograph is solely due to the difference in the flux of electrons through each point in the image - the contrast. The electron microscopist must understand the reasons for contrast in order to gather information from the sample. We shall deal briefly with the main sources of contrast in the following: * Mass absorption contrast + On passing through matter, a beam of electrons is gradually attenuated. The degree of attenuation increases with the thickness of the specimen and its mass, so variations of mass and thickness across the sample give rise to contrast in the image. * Diffraction contrast + Diffraction of electrons from Bragg planes causes a change in their direction of travel (elastic scattering). Hence, contrast can arise between adjacent grains or between different regions near the core of a dislocation. * Phase contrast + Scattering mechanisms often cause a change in the phase of the scattered electrons, as well as a change in direction. Interference between electrons of different phase which are incident on the same part of the image will cause a change in intensity and give rise to contrast.
This is normally only visible at high magnifications and for microscopes that can achieve atomic resolution (HRTEMs). STEM Instead of recording the image from a sample all at once, we can illuminate a very small segment of the sample at one time and record the magnitude of electron scattering from that point. This can be done rapidly, and an image is built up in the same way as on a television screen by scanning the beam across the sample. This technique is called scanning transmission electron microscopy (STEM). Since the whole image is not collected and focussed at the same moment, no lenses are needed after the sample. Instead, a set of annular detectors is used. The spatial resolution of this technique is given by the size of the electron beam at the specimen surface (controlled by the gun and condenser system). An advantage in image formation is that electrons scattered through large angles (Rutherford scattering) may be detected using a high-angle annular dark-field (HAADF) detector and a fourth mechanism of contrast exploited. At large angles the intensity of scattering \(I \propto Z^x\), where \(x \approx 2\). STEM HAADF images display compositional contrast, and can be used to quantitatively assess elemental composition down to the atomic scale. Using a STEM in conjunction with analytical detectors it is possible to collect compositional maps of specimens, for example by energy dispersive X-ray spectroscopy (EDS) or electron energy loss spectroscopy (EELS). STEM is thus used for high-resolution chemical analysis of specimens. Image resolution The resolution of an image is the smallest distance between two points at which they may be distinguished as separate. The resolution of perfect optical lenses is limited by diffraction effects: the finite size of the lens (aperture) causes a modulation of transmitted light intensity collected on a viewing screen some distance away. The pattern of intensity, known as an Airy pattern, displays a strong central maximum (i.e. the Airy disk), surrounded by concentric minima and maxima. A similar effect can be expected for electron lenses in the TEM: the intensity transmitted by the objective lens will be affected by diffraction such that a point-like object in the specimen plane will produce an Airy disk in the image plane. Two point-like objects in the specimen will be distinguished as separate if their separation exceeds \(r_d = 0.61 \lambda / \alpha\), where \(\lambda\) is the wavelength of the electron beam and \(\alpha\) is the semi-angle subtended by the lens (aperture). This can be defined as the resolution of a perfect electron lens, based on the Rayleigh criterion. Electron lenses are not perfect. They suffer from astigmatism, as well as chromatic and spherical aberrations, which arise from the spread of electron velocities in the beam, their angular distribution, and their distance from the optic axis as they travel through the magnetic field generated by the lenses. Lens astigmatism is corrected by adjusting lens stigmators to compensate for image distortions. The effect of chromatic aberration is seen when electrons travelling at different velocities experience a different Lorentz force as they cross the lens, and are focused at different distances along the optic axis. This degrades the resolution of the image. The effect can be reduced substantially by using a FEG electron source with a small energy spread. It is important to note that the beam energy distribution always broadens when electrons interact with the specimen through inelastic collisions.
Hence small chromatic distortions are unavoidable in TEM images. A lens is said to display spherical aberration when the field of the lens behaves differently for electrons travelling near the optic axis and those travelling off-axis. The image resolution is degraded by \(r_s = C_s \alpha^3\), where \(C_s\) is the spherical aberration coefficient (usually expressed in mm), and \(\alpha\) is, again, the semi-angle subtended by the lens (aperture). Spherical aberration may be reduced by forming images just with electrons that travel close to the optic axis, i.e. by minimising \(\alpha\). As you can see in the animations, this can be accomplished using a small aperture to exclude electron trajectories that cross the lens far from its centre. However, reducing the aperture size reduces the beam current and increases the diffraction experienced by the beam. There is, therefore, an optimum aperture size for the greatest resolution. Balancing the diffraction limit \(r_d\) against the spherical aberration limit \(r_s\), the optimum resolution scales as \(r_{opt} \propto \lambda^{3/4} C_s^{1/4}\) (to within a numerical factor of order one). Conventional TEMs can achieve resolutions of 0.2 nm, and hence allow imaging of atomic lattices. Aberration-corrected TEMs, where additional electron-optic components are introduced to compensate for spherical and chromatic aberrations, can achieve point resolutions below 0.1 nm (in phase contrast images). Summary = Through this TLP we have seen how a beam of electrons is generated, manipulated and detected in an electron microscope. We have explored the various components of the electron microscope, and seen how they work together to extract information from a sample on the nanometre scale. Finally, we can begin to appreciate the power and versatility of electron microscopy, and how it may be useful in the study of materials. Questions = ### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. Explain why the image rotates when the strength of an electromagnetic lens is changed.

2. Which of these lens conditions gives the smallest convergence angle?

   a. over-focus
   b. in focus
   c. under-focus

3. If an object is placed 1 mm from a (convex) lens of focal length 0.25 mm, where will the image be located?

4. How can chromatic aberrations be minimised in a TEM?

5. Which imaging technique requires the smaller objective aperture?

   a. diffraction contrast
   b. phase contrast

6. What is the minimum magnification needed to make visible the {111} planes in silicon?

Going further = ### Books Goodhew, Humphreys and Beanland, *Electron Microscopy and Analysis*, 3rd Edition, Taylor and Francis, 2001. Williams and Carter, *Transmission Electron Microscopy*, Kluwer/Plenum Press, 2nd edition, 2009.
Aims On completion of this TLP you should: * Understand what a tensor is * Be familiar with some of the applications of tensors to materials science * Understand the significance of the representation surface of a tensor * Be able to find the principal values and axes of a tensor * Be able to transform tensors from one frame to another * Be able to use the representation surface for a second rank matter tensor to describe and calculate material properties as a function of direction Before you start - The basics = This TLP requires a basic knowledge of vector and matrix algebra, including scalar products, matrix multiplication, 3×3 determinants and suffix notation. The TLP includes a section which revises these concepts; you can skip this section if you do not need to cover this material again. If you are unfamiliar with the concept of anisotropy in materials, you should follow the TLP on anisotropy before starting this one. Similarly, you may need to read through the TLP on crystallography to gain the understanding of crystal systems and symmetry elements which will be needed in order to follow the section on the effects of symmetry on tensors. Introduction Many physical phenomena of interest in materials science are naturally described by tensors, including thermal, mechanical, electrical and magnetic properties. In isotropic materials, many properties (e.g. electrical conductivity) can be described by a single number, a scalar. However, in a general crystalline solid these properties can vary with the direction in which they are measured - and tensors are needed to describe them fully. This TLP offers an introduction to the mathematics of tensors rather than the intricacies of their applications. Its aims are to familiarise the learner with tensor notation, how tensors can be constructed, and how they can be manipulated to give numerical answers to problems. Scalars, Vectors and Matrices = Before we can move on to tensors, we must first be familiar with scalars, vectors and matrices. If you are comfortable with these concepts, you can move on to the next page. Scalars - * These are direction-independent quantities that can be fully described by a single number, and are unaffected by rotations or changes in co-ordinate system. Examples of physical properties that are scalars: energy, temperature, mass. * For this TLP scalars will be written in *italics*. Vectors - * These are objects that possess a magnitude and a direction, and are referenced to a particular set of axes known as a basis. A basis is a set of unit vectors (vectors with a magnitude of 1) from which any other vector can be constructed by multiplication and addition. * The vector is referenced to the basis by its components. If possible, the maths is simplified by using an orthonormal basis with orthogonal (mutually perpendicular) unit vectors. Examples of physical properties that are described by vectors: mechanical force, heat flow, electric field. * Vectors will be written in **bold** and components of a vector, say x, will be written as xi ![image of matrix](images/matrix.gif) Matrices - * A matrix is a mathematical object that contains a rectangular array of numbers that can be added and multiplied (according to matrix multiplication rules). They are very useful in many applications, for example in reducing a set of linear equations to a single equation, storing the coefficients of linear transformations (e.g. rotations), and, as we shall see, in describing tensors.
* The components of matrix **A** are written aij, where i refers to the row and j to the column. ![Diagram of scalar products](images/scalarProduct.gif) Scalar products - * For two vectors a = (a1, a2, a3) and b = (b1, b2, b3), the scalar product (also known as the dot product) is defined as: a.b = a1b1 + a2b2 + a3b3 and so, for example, the vectors (1, 4, −3) and (2, 5, 1) have a scalar product of 1×2 + 4×5 − 3×1 = 19. * The scalar product is related to θ, the angle between the two vectors, and can equivalently be written as: a.b = |a||b|cosθ. * For vectors of unit length, we can see that the scalar product is equal to the cosine of the angle between them. Matrix multiplication - If we have two matrices \[ \mathbf{A} = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \quad \text{and} \quad \mathbf{B} = \begin{pmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{pmatrix} \] then the product **C** = **AB** has components \(c_{ij} = \sum_{k=1}^{3} a_{ik}b_{kj}\), where i, j and k are indices that represent the position of the element in the matrix: \[ \mathbf{C} = \mathbf{AB} = \begin{pmatrix} a_{11}b_{11} + a_{12}b_{21} + a_{13}b_{31} & a_{11}b_{12} + a_{12}b_{22} + a_{13}b_{32} & a_{11}b_{13} + a_{12}b_{23} + a_{13}b_{33} \\ a_{21}b_{11} + a_{22}b_{21} + a_{23}b_{31} & a_{21}b_{12} + a_{22}b_{22} + a_{23}b_{32} & a_{21}b_{13} + a_{22}b_{23} + a_{23}b_{33} \\ a_{31}b_{11} + a_{32}b_{21} + a_{33}b_{31} & a_{31}b_{12} + a_{32}b_{22} + a_{33}b_{32} & a_{31}b_{13} + a_{32}b_{23} + a_{33}b_{33} \end{pmatrix} \] Don't bother trying to remember the above result; remember the rule: ROW × COLUMN = RC = "Race Car" or "Really Cool!", or make up your own acronym to remember it. This is also useful for remembering the conventional order of suffices, where the first suffix indicates the row and the second indicates the column. You can use the following activity to practise more matrix multiplication (a minimal code sketch is also given below).
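Here is a minimal Python sketch of the ROW × COLUMN rule just described, computed element by element and checked against NumPy's built-in product; the matrices are arbitrary illustrative values:

```python
import numpy as np

# Minimal sketch of the ROW x COLUMN rule: c_ij = sum_k a_ik * b_kj.
A = np.array([[1, 4, -3],
              [2, 0,  1],
              [0, 5,  2]])
B = np.array([[2, 5, 1],
              [1, 0, 3],
              [4, 2, 0]])

# Element by element, straight from the rule:
C = np.zeros((3, 3), dtype=int)
for i in range(3):          # row of A
    for j in range(3):      # column of B
        C[i, j] = sum(A[i, k] * B[k, j] for k in range(3))

assert (C == A @ B).all()   # matches NumPy's matrix product
print(C)
```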
3×3 determinants The determinant of a 3×3 matrix can be calculated along any row by 'expanding by minors'. For the matrix \[ \mathbf{A} = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \] the determinant is \[ |\mathbf{A}| = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = a_{11}\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} - a_{12}\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} + a_{13}\begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix} = a_{11}(a_{22}a_{33} - a_{23}a_{32}) - a_{12}(a_{21}a_{33} - a_{23}a_{31}) + a_{13}(a_{21}a_{32} - a_{22}a_{31}) \] The minor of a11 is the 2×2 determinant made up from the elements not in its row or column. The minus sign appears because expanding along the rows or columns follows the cofactor pattern: \[ \begin{pmatrix} + & - & + \\ - & + & - \\ + & - & + \end{pmatrix} \] What is a Tensor? = Tensors are simply mathematical objects that can be used to describe physical properties, just like scalars and vectors. In fact tensors are merely a generalisation of scalars and vectors: a scalar is a zero rank tensor, and a vector is a first rank tensor. The rank (or order) of a tensor is defined by the number of directions (and hence the dimensionality of the array) required to describe it. For example, properties that require one direction (first rank) can be fully described by a 3×1 column vector, and properties that require two directions (second rank tensors) can be described by 9 numbers, as a 3×3 matrix. As such, in general an nth rank tensor can be described by \(3^n\) coefficients. The need for second rank tensors comes when we need to consider more than one direction to describe one of these physical properties. A good example of this is if we need to describe the electrical conductivity of a general, anisotropic crystal. We know that in general, for isotropic conductors that obey Ohm's law, j = σE, which means that the current density j is parallel to the applied electric field E, and that each component of j is linearly proportional to each component of E (e.g. j1 = σE1). However, in an anisotropic material the current density induced will not necessarily be parallel to the applied electric field, due to preferred directions of current flow within the crystal (graphite is a good example of this).
This means that in general each component of the current density vector can depend on all the components of the electric field: j1 = σ11E1 + σ12E2 + σ13E3 j2 = σ21E1 + σ22E2 + σ23E3 j3 = σ31E1 + σ32E2 + σ33E3 So in general, electrical conductivity is a second rank tensor and can be specified by 9 independent coefficients, which can be represented in a 3×3 matrix as shown below: \[ \boldsymbol{\sigma} = \begin{pmatrix} \sigma_{11} & \sigma_{12} & \sigma_{13} \\ \sigma_{21} & \sigma_{22} & \sigma_{23} \\ \sigma_{31} & \sigma_{32} & \sigma_{33} \end{pmatrix} \] Other examples of second rank tensors include electric susceptibility, thermal conductivity, stress and strain. They typically relate a vector to another vector, or another second rank tensor to a scalar. Tensors of higher rank are required to fully describe properties that relate two second rank tensors (e.g. stiffness (4th rank): stress and strain) or a second rank tensor and a vector (e.g. piezoelectricity (3rd rank): stress and polarisation). To view these and more examples, and to investigate how changing the components of the tensors affects these properties, go through the program below. Tensor Usage Lots of physical quantities of interest can be described by tensors, and a small subset of the common ones is shown in the animation below. The tensors in the circles are those that can be applied and measured in any orientation with respect to the crystal (e.g. stress, electric field) and are known as **field tensors**. The tensors that link these properties are those that are intrinsic properties of the crystal and must conform to its symmetry (e.g. thermal conductivity), and are known as **matter tensors**. Many of these quantities are described by symmetrical tensors (e.g. stress, electrical susceptibility), for which the off-diagonal components Tij and Tji are equal (i.e. T12 = T21). Taking electrical susceptibility as an example, this means that applying a field in the 1 direction produces a polarisation in the 2 direction, and this is equal in magnitude to the polarisation produced in the 1 direction if the field is applied in the 2 direction. Whilst this seems intuitively reasonable, the explanation for it is not immediately obvious and the mathematical proof is in fact quite complex. Readers who would like to follow the detailed argument can refer to the textbook by Nye (*Physical Properties of Crystals*; see the Going further page). Tensor Notation = Suffix notation - Suffices are used to represent components of tensors and vectors. For example, in the case of a vector x = (x1 x2 x3), we can refer to its jth component as xj. We can also refer to x as the vector xj, where we know that j can take the values 1, 2 and 3 (j is then known as a free suffix). It is important to note that tensors are defined with respect to a basis, just as with vectors, and that the individual components of the tensor change when the basis is changed, while the magnitude and physical meaning stay the same. Note that there are different conventions for the order of the suffices.
In this TLP we use the tensor component Tij to represent the effect on the i axis due to an action on the j axis. Einstein summation convention Let us consider the equation x.y = x1y1 + x2y2 + x3y3. This can be written as \(\mathbf{x}\cdot\mathbf{y} = \sum_{i=1}^{3} x_i y_i\). Using the Einstein summation convention, we can drop the sigma and just write this as x.y = xiyi, remembering to sum over all the indices. Another example of this is the equation y = (a.b)x, which can be written using the summation convention as yi = ajbjxi, where j is summed over (known as a dummy suffix) and the value of i can be 1, 2 or 3 (i.e. i is a free suffix). Note that in effect this represents 3 separate equations, one for each vector component. If a suffix appears twice in a term it is a dummy suffix and is summed over, whereas free suffices appear once in every term. A more complex example: (|a|² − c.a)x + |b|²y = zφ can be rewritten as (ajaj − clal)xi + bkbkyi = ziφ. Second rank tensors have components in two directions. This leads to the components of the tensor **A** being written aij, such that a tensor operating on a vector to give another vector, y = **A**x, can be written yi = aijxj, where we see that the suffix j is summed over. This also applies for tensor multiplication, for which **C** = **AB** becomes cij = aikbkj, where k is summed over. Making use of this convention is a useful simplifying technique in proving tensor and vector properties. Voigt Notation As we have seen, many physical quantities are described by symmetric tensors. Voigt notation (also known as matrix notation) is an alternative way of representing and simplifying these tensors. An example using a symmetrical second rank tensor (e.g. stress) is shown below: \[ \begin{pmatrix} T_{11} & T_{12} & T_{13} \\ T_{12} & T_{22} & T_{23} \\ T_{13} & T_{23} & T_{33} \end{pmatrix} = \begin{pmatrix} T_{1} & T_{6} & T_{5} \\ \cdot & T_{2} & T_{4} \\ \cdot & \cdot & T_{3} \end{pmatrix} \;\rightarrow\; \begin{pmatrix} T_{1} \\ T_{2} \\ T_{3} \\ T_{4} \\ T_{5} \\ T_{6} \end{pmatrix} \]

| Tensor notation | 11 | 22 | 33 | 23, 32 | 13, 31 | 12, 21 |
| - | - | - | - | - | - | - |
| Voigt notation | 1 | 2 | 3 | 4 | 5 | 6 |

These substitutions allow us to represent a symmetric second rank tensor as a 6-component vector. Likewise a third rank tensor can be represented as a 3×6 matrix (keeping the first suffix, e.g. T123 = T14), and a fourth rank tensor as a 6×6 matrix (doing the operation on the first two and then the last two suffices, e.g. T1322 = T52). This is very useful, as we can display every tensor up to 4th rank as a single two-dimensional matrix, simplifying the maths and making tensors easier to visualise (a minimal code sketch of the mapping is given below).
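Here is a minimal Python sketch of the suffix-pair mapping in the table above for a symmetric second rank tensor; `VOIGT` and `to_voigt` are hypothetical names introduced only for this illustration, and the scaling factors used for the strain tensor (discussed next) are deliberately omitted:

```python
import numpy as np

# Minimal sketch of the Voigt mapping for a symmetric second rank tensor.
# Pairs of tensor suffices map to a single Voigt index:
# 11->1, 22->2, 33->3, 23/32->4, 13/31->5, 12/21->6 (0-based below).
VOIGT = {(0, 0): 0, (1, 1): 1, (2, 2): 2,
         (1, 2): 3, (2, 1): 3,
         (0, 2): 4, (2, 0): 4,
         (0, 1): 5, (1, 0): 5}

def to_voigt(T):
    """Convert a symmetric 3x3 tensor to its 6-component Voigt vector."""
    v = np.zeros(6)
    for (i, j), n in VOIGT.items():
        v[n] = T[i, j]
    return v

T = np.array([[1.0, 6.0, 5.0],
              [6.0, 2.0, 4.0],
              [5.0, 4.0, 3.0]])
print(to_voigt(T))   # -> [1. 2. 3. 4. 5. 6.]
```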
It is particularly useful for the equations of elasticity, where σij = Cijklεkl can be converted to σi = Cijεj: \[ \begin{pmatrix} \sigma_1 \\ \sigma_2 \\ \sigma_3 \\ \sigma_4 \\ \sigma_5 \\ \sigma_6 \end{pmatrix} = \begin{pmatrix} C_{11} & C_{12} & C_{13} & C_{14} & C_{15} & C_{16} \\ C_{21} & C_{22} & C_{23} & C_{24} & C_{25} & C_{26} \\ C_{31} & C_{32} & C_{33} & C_{34} & C_{35} & C_{36} \\ C_{41} & C_{42} & C_{43} & C_{44} & C_{45} & C_{46} \\ C_{51} & C_{52} & C_{53} & C_{54} & C_{55} & C_{56} \\ C_{61} & C_{62} & C_{63} & C_{64} & C_{65} & C_{66} \end{pmatrix} \begin{pmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \varepsilon_3 \\ \varepsilon_4 \\ \varepsilon_5 \\ \varepsilon_6 \end{pmatrix} \] It should be noted that for convenience some scaling factors are often introduced when converting tensors into Voigt notation. For example, by convention the off-diagonal (shear) components of the strain tensor **ε** are converted such that in Voigt notation they are equal to the engineering shear strain: \[ \begin{pmatrix} \varepsilon_{11} & \varepsilon_{12} & \varepsilon_{13} \\ \varepsilon_{21} & \varepsilon_{22} & \varepsilon_{23} \\ \varepsilon_{31} & \varepsilon_{32} & \varepsilon_{33} \end{pmatrix} = \begin{pmatrix} \varepsilon_{1} & \tfrac{1}{2}\gamma_{12} & \tfrac{1}{2}\gamma_{13} \\ \tfrac{1}{2}\gamma_{21} & \varepsilon_{2} & \tfrac{1}{2}\gamma_{23} \\ \tfrac{1}{2}\gamma_{31} & \tfrac{1}{2}\gamma_{32} & \varepsilon_{3} \end{pmatrix} = \begin{pmatrix} \varepsilon_{1} & \tfrac{1}{2}\varepsilon_{6} & \tfrac{1}{2}\varepsilon_{5} \\ \cdot & \varepsilon_{2} & \tfrac{1}{2}\varepsilon_{4} \\ \cdot & \cdot & \varepsilon_{3} \end{pmatrix} \] so that the Voigt strain vector is (ε1, ε2, ε3, ε4, ε5, ε6) with ε4 = γ23, ε5 = γ13 and ε6 = γ12. As such, care must be taken when looking up numerical values and converting between notations, to check that consistent definitions are used. Transformation of axes As with a vector, every tensor is described with respect to a basis, and if we choose a different basis or a different orientation from which to look at the problem, the physical meaning is the same but the components of the tensor will change. Some orientations are easier to work in than others, due to the geometry of the problem or the properties of the physical situation. We must learn how to move our problem from one frame into another. The transformation matrices which we require are pure rotations and are therefore given the symbol **R**. Transforming the basis - Let us consider the 2-dimensional simplification first: ![Diagram of transforming the basis with basis vectors](images/rotation.gif) We are rotating from the basis with basis vectors x and y into a new basis with basis vectors x' and y'.
The new basis can be written in terms of the old basis by resolving the vectors: x' = x cosθ + y sinθ and y' = −x sinθ + y cosθ, which can be written in matrix form as: \[ \begin{pmatrix} \mathbf{x}' \\ \mathbf{y}' \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} \mathbf{x} \\ \mathbf{y} \end{pmatrix} \] The components of this rotation matrix, **R**, are the cosines of the angles involved (known as direction cosines). The component rij is the cosine of the angle between xj (old basis) and xi' (new basis), i.e. the component of xj resolved along xi'. For example, in general we can say: x1' = r11x1 + r12x2 + r13x3 Since for unit vectors the scalar product is just the cosine of the angle between the two vectors, we can write: rij = xi'.xj Written in full we get: \[ \mathbf{R} = \begin{pmatrix} \mathbf{x}_1'\cdot\mathbf{x}_1 & \mathbf{x}_1'\cdot\mathbf{x}_2 & \mathbf{x}_1'\cdot\mathbf{x}_3 \\ \mathbf{x}_2'\cdot\mathbf{x}_1 & \mathbf{x}_2'\cdot\mathbf{x}_2 & \mathbf{x}_2'\cdot\mathbf{x}_3 \\ \mathbf{x}_3'\cdot\mathbf{x}_1 & \mathbf{x}_3'\cdot\mathbf{x}_2 & \mathbf{x}_3'\cdot\mathbf{x}_3 \end{pmatrix} \] This is the transformation matrix to go from the old to the new basis. To go from the new to the old basis, it is easily seen that the matrix is the transpose of the one above. So rotating and then rotating back gives \(\mathbf{R}\mathbf{R}^{\mathrm{T}} = \mathbf{I}\), i.e. the original result. Therefore the inverse of the rotation matrix is its transpose. Transforming a vector - Consider the vector: a = a1x1 + a2x2 + a3x3 = a1'x1' + a2'x2' + a3'x3' We know that the component of a resolved onto the new 1 axis is: a1' = a.x1' = a1r11 + a2r12 + a3r13 The other components can be similarly resolved, giving the above result. Therefore, to rotate a vector we use the following equation: a' = **R**a Transforming a second rank tensor - To derive the transformation law for a second rank tensor, let us consider the general tensor equation: p = **T**q (in the old basis), and p' = **T'**q' (in the new basis). To transform the vectors we use: p' = **R**p and q' = **R**q The above knowledge allows us to make some simple substitutions to see that \(\mathbf{p}' = \mathbf{Rp} = \mathbf{RTq} = \mathbf{RTR}^{-1}\mathbf{q}' = \mathbf{RTR}^{\mathrm{T}}\mathbf{q}'\), and also that p' = **T'**q'. Hence: \(\mathbf{T}' = \mathbf{RTR}^{\mathrm{T}}\) In suffix notation, this can be written as the final transformation law: Tij' = rimrjnTmn, or conversely Tij = rmirnjTmn'. Note again that these both represent 9 equations, one for each component of the tensor. Transforming an nth rank tensor - For an nth rank tensor the transformation law is as follows: Tijk...' = rimrjnrko...Tmno... where there are n transformation matrices. The transformation laws are useful as we can then give a mathematical definition of a tensor as 'an object whose coefficients transform according to the rules above'. This results in an object that retains its physical meaning whatever basis is used to describe it. This is an important concept, because transforming to a well-known basis usually simplifies the mathematics of a problem, as we will see in the next section.
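Here is a minimal NumPy sketch of the transformation law \(\mathbf{T}' = \mathbf{RTR}^{\mathrm{T}}\) derived above, in both matrix and suffix-notation form; the tensor and the rotation angle are assumed, illustrative values:

```python
import numpy as np

# Minimal sketch of the second rank transformation law T' = R T R^T,
# using an illustrative tensor and a rotation about the x3 axis.
theta = np.deg2rad(30.0)
c, s = np.cos(theta), np.sin(theta)
R = np.array([[  c,   s, 0.0],
              [ -s,   c, 0.0],
              [0.0, 0.0, 1.0]])   # rows are the new basis vectors (r_ij = x_i'.x_j)

T = np.diag([2.0, 1.0, 1.0])      # a tensor written in its principal basis (assumed values)

T_new = R @ T @ R.T               # the transformation law in matrix form
print(np.round(T_new, 3))

# The same law in suffix notation, T'_ij = r_im r_jn T_mn, via einsum:
T_new2 = np.einsum('im,jn,mn->ij', R, R, T)
assert np.allclose(T_new, T_new2)
```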
Principal axes As we have seen, a general second rank tensor has the form: \[ \begin{pmatrix} T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23} \\ T_{31} & T_{32} & T_{33} \end{pmatrix} \] However, in a particular basis, this takes a simpler form: \[ \begin{pmatrix} T_{11}' & 0 & 0 \\ 0 & T_{22}' & 0 \\ 0 & 0 & T_{33}' \end{pmatrix} \] i.e. all off-diagonal elements are zero. These basis vectors are known as the principal axes (or directions) and the non-zero tensor components as the principal values. Note that in general the principal axes for a given property will not necessarily coincide with the crystal axes. Using principal axes simplifies the mathematics and highlights the symmetry of the situation. Considering once again the case of electrical conductivity, when working in an arbitrary basis the equations take the form: j1 = σ11E1 + σ12E2 + σ13E3 j2 = σ21E1 + σ22E2 + σ23E3 j3 = σ31E1 + σ32E2 + σ33E3 In the principal basis they take the form: j1 = σ11E1 j2 = σ22E2 j3 = σ33E3 i.e. the effect of an action along a principal axis is also directed along that axis (the conductivities along each principal axis are of course in general different from each other). Finding the principal axes As we have seen, in the principal basis the component equations become **T**x = λx, where λ is a constant of proportionality. This represents 3 different linear equations, where λ has 3 possible values (the principal values): T11x1 + T12x2 + T13x3 = λx1 T21x1 + T22x2 + T23x3 = λx2 T31x1 + T32x2 + T33x3 = λx3 There is a useful solution for this when |**T** − λ**I**| = 0, i.e. when: \[ \begin{vmatrix} T_{11} - \lambda & T_{12} & T_{13} \\ T_{21} & T_{22} - \lambda & T_{23} \\ T_{31} & T_{32} & T_{33} - \lambda \end{vmatrix} = 0 \] This gives a cubic equation in λ called the secular equation. To find the principal values we must solve this equation for λ. For each of the three solutions for λ we find the vector x that solves the equation above. Each of these solutions for x is a vector parallel to one of the principal axes. This vector can be of any length so long as it points along the principal axis, so generally we scale the vector so that it is of unit length, giving us an orthonormal basis. It is worth noting that the principal values are called the eigenvalues of the matrix representing **T**, and the unit vectors along the principal axes are its eigenvectors. The general operation of finding these is not only useful when simplifying tensors, but is used throughout physics and chemistry, for example in studying modes of vibration and calculating energies in quantum mechanics. The activity below shows you how to find the secular equation and principal values of a symmetric second rank tensor. If we are working with a tensor where one of the principal values is already given, i.e.
a tensor of the form: \[ \mathbf{T} = \begin{pmatrix} T_{11} & T_{12} & 0 \\ T_{21} & T_{22} & 0 \\ 0 & 0 & T_{33} \end{pmatrix} \] then we can use the Mohr's circle construction to find the two unknown principal values geometrically. This is demonstrated further in the Theory of Metal Forming TLP. Transforming our tensor into the principal basis Using what we know about transformation matrices, i.e. that rij = xi'.xj, we can see that the transformation matrix to rotate from the old basis into the principal basis is simply the matrix with the normalised eigenvectors (e1, e2 and e3) as its rows: \[ \mathbf{R} = \begin{pmatrix} \mathbf{e}_1 \\ \mathbf{e}_2 \\ \mathbf{e}_3 \end{pmatrix} \] Below are two more programs to show you another example of finding the principal values and principal axes. The representation surface The representation surface (or representation quadric) is a geometrical representation of a second rank tensor. It is useful for giving us a visual image of the tensor, as well as, for example, in calculating the magnitudes of material properties described by second rank tensors. Let us consider the equation Tijxixj = 1, where Tij represents a second rank tensor and xi and xj are coordinates. This can be written in full as: \[ T_{11}x_1^2 + T_{12}x_1x_2 + T_{13}x_1x_3 + T_{21}x_2x_1 + T_{22}x_2^2 + T_{23}x_2x_3 + T_{31}x_3x_1 + T_{32}x_3x_2 + T_{33}x_3^2 = 1 \] which can be plotted to obtain a 3-dimensional graph. This graph is in fact a surface that is a complete description of **T**. If we want to transform the surface to a new basis, making the substitutions xi = rkixk' and xj = rljxl', we can write the equation of the representation surface as Tijrkixk'rljxl' = 1, or equivalently as Tkl'xk'xl' = 1. This means that Tkl' = rkirljTij. Therefore any transformation of the tensor results in an identical transformation of the 3D plot. For symmetric tensors, the quadric equation for the representation surface simplifies to: \[ T_{11}x_1^2 + T_{22}x_2^2 + T_{33}x_3^2 + 2T_{12}x_1x_2 + 2T_{13}x_1x_3 + 2T_{23}x_2x_3 = 1 \] which is the equation of an ellipsoid. For non-symmetric tensors and those with negative principal values, the representation surfaces are more complex. As with the tensor itself, the representation surface has its simplest form when referred to the principal axes (when the basis vectors are aligned with the axes of the ellipsoid), where the equation becomes: \[ T_1x_1^2 + T_2x_2^2 + T_3x_3^2 = 1 \] (where T1, T2 and T3 are the principal values of the tensor). Magnitude of a property in a given direction We will often want to talk about the magnitude of a property in a particular direction. For example, if we apply an electric field to graphite in the [143] direction and measure the current density in the same direction, we would like to be able to describe the result as a measurement of the conductivity of graphite in the [143] direction. Formally, the applied electric field E is described by a vector, as is the resultant current density j. However, conductivity requires a second rank matter tensor, and this means that these two vectors (E and j) will not in general be parallel to each other, and we will only measure the component of j which is parallel to E.
For practical reasons it is therefore sensible to define the conductivity in a particular direction as the component of j that is parallel to E divided by the magnitude of E, i.e. j∥/E.

We can apply this definition to a general second rank matter tensor **T**. If a field q = q(l1, l2, l3) is applied then the response of the material will be p = **T**q. Along the vector q the magnitude of T is given by:

\[T = \frac{{\bf{p}} \cdot {\bf{q}}}{q} \times \frac{1}{q} = \frac{{\bf{Tq}} \cdot {\bf{q}}}{q^2} = \frac{T_{ij}q_iq_j}{q^2} = T_{ij}l_il_j\]

This can be simply related to the representation surface. We know that the surface is described by the equation Tijxixj = 1, where xi = r li and r is the radius. This gives us

r²Tijlilj = r²T = 1

Hence the radius of the surface and the magnitude of the property it describes in a given direction are related by:

T = 1/r²  or  r = 1/√T

The radius-normal property

The radius-normal property of a representation surface gives us a geometrical method of finding the effect of a second rank tensor for a given action, for example finding the current density for a given electric field. The property states that, for the tensor equation p = **T**q, if we draw the representation surface of **T** and take q from the origin, the vector normal to the surface at the point where q meets it is parallel to the direction of p. The size of p is given by the previously explained 'magnitude in a given direction' formula, i.e. |p| = |q|/r², where r is the radius to the point on the representation surface.

The property will be demonstrated along the principal axes of **T**, but it can be generalised to any basis. Let vector q = q(l1, l2, l3), where q is the magnitude and l1, l2 and l3 are the direction cosines of q. The point Q is the point on the representation surface such that OQ is parallel to q, so that OQ = r(l1, l2, l3). As the equation of the surface is T1x1² + T2x2² + T3x3² = 1, the tangent plane at the point (a1, a2, a3) has equation T1x1a1 + T2x2a2 + T3x3a3 = 1. Hence the normal is n = (T1a1, T2a2, T3a3) = r(T1l1, T2l2, T3l3). Now as q = q(l1, l2, l3) and we are working in the principal basis, we find simply that p = q(T1l1, T2l2, T3l3), showing us that p is parallel to n.

These properties of the representation surface give us a simple way of finding the magnitude of a property in a given direction.
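As a quick numerical sketch of T = Tijlilj: the uniaxial principal values below are the graphite conductivities quoted in question 9 at the end of this TLP, and the direction is an arbitrary choice for illustration.

```python
import numpy as np

# Conductivity tensor in its principal basis (S/m); sigma_1 = sigma_2 in the
# layer planes, sigma_3 along x3 (values taken from question 9 of this TLP)
T = np.diag([1.02e5, 1.02e5, 0.24e5])

d = np.array([1.0, 1.0, 2.0])      # an arbitrary direction
l = d / np.linalg.norm(d)          # direction cosines l_i

T_dir = l @ T @ l                  # T = T_ij l_i l_j
print(f"magnitude in this direction: {T_dir:.3e} S/m")

# Radius of the representation surface in the same direction: r = 1/sqrt(T)
print(f"representation surface radius: {1 / np.sqrt(T_dir):.3e}")
```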
The effects of crystal symmetry =

Matter tensors abide by a fundamental postulate of crystal physics known as Neumann's Principle. This principle states that: *'the symmetry elements of any physical property of a crystal must include the symmetry elements of the point group of the crystal'*. As we know, the physical properties of crystals are described by tensors, and the point group of a crystal is the set of its macroscopic symmetry elements, such as rotation axes, mirror planes, and centres of symmetry. Taken with the 7 different crystal systems, the possible combinations of symmetry elements give rise to the 32 crystal classes. This postulate essentially puts conditions on the form of matter tensors depending on the crystal symmetry - the tensors describing the matter property must be invariant under its symmetry operations.

Effects of symmetry on first rank matter tensors

* The vectors describing the matter property must be invariant under the symmetry operations. Straight away we see that any crystal with a *centre of inversion* cannot possess a first rank property, since for a general non-zero vector p, (p1, p2, p3) ≠ (−p1, −p2, −p3).
* We can also see with a little thought that if there is a *rotation axis* the vector property must lie along the rotation axis. An immediate consequence of this is that if the crystal structure has more than one rotation axis, then once again the crystal cannot possess the vector property, since it cannot lie along two different rotation axes.
* If the crystal includes a *mirror plane*, the vector must lie within the plane. If there is more than one mirror plane, the vector must lie in their intersection.
* Finally, if the crystal system contains both a mirror plane and a rotation axis, the vector is non-zero only if the rotation axis is contained within the mirror plane.

The crystal classes that can possess a first rank matter tensor property, along with the number of independent components and the form of the vector, are shown below.

| Crystal system | Crystal class | Number of independent components | Form of the vector |
| - | - | - | - |
| Triclinic | 1 | 3 | (p1, p2, p3) |
| Monoclinic (diad axis parallel to x2) | 2 | 1 | (0, p, 0) |
| Monoclinic | m | 2 | (p1, 0, p3) |
| Orthorhombic | mm2 | 1 | (0, 0, p) |
| Tetragonal | 4 | 1 | (0, 0, p) |
| Tetragonal | 4mm | 1 | (0, 0, p) |
| Trigonal | 3 | 1 | (0, 0, p) |
| Trigonal | 3m | 1 | (0, 0, p) |
| Hexagonal | 6 | 1 | (0, 0, p) |
| Hexagonal | 6mm | 1 | (0, 0, p) |

Effects of symmetry on second rank tensors

Straight away we see that properties relating to a second rank tensor are centrosymmetric, since on inverting the vectors in the equation pi = Tijqj the same *Tij* satisfy the equation. So although the crystal may not have a centre of inversion, the tensor property does. The best way to consider the conditions imposed by the crystal systems is to consider the representation surface, expressing its axes relative to the crystallographic axes. We shall consider rotations only, as it can be demonstrated that mirror symmetries are covered by the rotation results. The general representation surface has 3 mutually perpendicular diads, three planes of symmetry perpendicular to the diad axes, and is centrosymmetric.

* Triclinic - Since the crystal has no symmetry elements not already possessed by the general representation surface, there are no restrictions on the components, and the tensor retains 6 independent components. These components contain information on the magnitudes of the three principal values and the 3 angles required to define the orientation of the quadric axes relative to the crystallographic axes.
* Monoclinic - A diad of the representation surface must be aligned with the diad of the crystal system. Apart from this the surface is free to take any orientation, so its independent components contain information about the three principal values and the one angle required to orientate the 2 free axes relative to the crystallographic axes.
* Orthorhombic - The crystal system contains 3 mutually perpendicular diads. On aligning the surface with the crystallographic axes we find only the principal value information is required. This also holds true for the *mmm* class.
* Uniaxial systems (tetragonal, trigonal and hexagonal) - The only way for the representation surface to possess 3-, 4- or 6-fold rotation symmetry is for it to be a surface of revolution about the unique crystallographic axis. This results in only 2 independent components, since 2 of the principal values must be equal.
* Cubic - The four triad axes of the cubic system force the surface to become a sphere, and so only a single component is required to define it.
| **Crystal system** | **Number of independent components** | **Form of the tensor** |
| - | - | - |
| Triclinic | 6 | \[\left( \begin{array}{ccc} T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23} \\ T_{31} & T_{32} & T_{33} \end{array} \right)\] |
| Monoclinic (diad axis parallel to x2) | 4 | \[\left( \begin{array}{ccc} T_{11} & 0 & T_{13} \\ 0 & T_{22} & 0 \\ T_{31} & 0 & T_{33} \end{array} \right)\] |
| Orthorhombic | 3 | \[\left( \begin{array}{ccc} T_1 & 0 & 0 \\ 0 & T_2 & 0 \\ 0 & 0 & T_3 \end{array} \right)\] |
| Tetragonal, Trigonal, Hexagonal | 2 | \[\left( \begin{array}{ccc} T_1 & 0 & 0 \\ 0 & T_1 & 0 \\ 0 & 0 & T_3 \end{array} \right)\] |
| Cubic | 1 | \[\left( \begin{array}{ccc} T & 0 & 0 \\ 0 & T & 0 \\ 0 & 0 & T \end{array} \right)\] |

Summary =

* Tensors can be used in a wide variety of Materials Science fields, including but not limited to stress and strain, temperature and entropy, and electricity and magnetism.
* A tensor is a set of coefficients which transform from one basis to another according to the transformation law: Tijk...' = rimrjnrko...Tmno...
* The representation surface of a second rank symmetric tensor is an ellipsoid constructed from the equation Tijxixj = 1.
* The principal values, λ, of a tensor can be found by solving the equation |**T** − λ**I**| = 0, and the principal axes by finding the vectors x such that (**T** − λ**I**)x = 0.
* We can impose conditions on the components of matter tensors, as they must adhere to the symmetry of the crystal class.

Questions =

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. Let **a** = (2,3,5) and **b** = (-1, 2, -4). Calculate ![Equation](../tensors/images/q1a.gif) (the scalar product).
2. Let **a** = (2,3,5) and **b** = (-1, 2, -4). Calculate ![Equation](../tensors/images/q2a.gif) (the tensor product).
3. Let ![](../tensors/images/q3a.gif), ![](../tensors/images/q3b.gif) and ![](../tensors/images/q3c.gif). Find the matrix products **AB**, **BA**, **AC**, **CAB** and the determinants |**A**|, |**AB**| and |**BA**|.
4. What is the transformation matrix for a rotation through angle θ about Ox2?
5. A vector is first rotated through angle θ about Ox2 and then through φ about Ox1. What is the combined transformation matrix?
6. Show that a pure shear stress field ![Equation](../tensors/images/q5a.gif) can be represented as a pure normal stress field by rotating through 45° about the vertical axis.
7. The conductivity tensor of a crystal is found to be ![Equation](../tensors/images/q6a.gif). Show that the crystal does not conduct in one direction and find this direction relative to the lab basis.
8. For the stress state with normal stresses σxx = 50 GPa, σyy = -70 GPa, σzz = 20 GPa and shear stresses σxy = 30 GPa, σxz = 45 GPa and σyz = 60 GPa, find the normal stress on the (263) plane in a cubic system.
9. Graphite has a layered hexagonal structure with cell dimensions a = 0.246 nm and c = 0.679 nm, and has electrical conductivities parallel and perpendicular to the layer planes of σ∥ = 1.02 × 10⁵ S m⁻¹ and σ⊥ = 0.24 × 10⁵ S m⁻¹ respectively. A sample is mounted such that an electric field is set up along the [112] direction. The current density is measured parallel to the electric field.
By first constructing the conductivity tensor and the direction cosine vector, find the expected current density in this direction for a 100 V m⁻¹ electric field.

Going further =

### Books

*Physical Properties of Crystals* by J. F. Nye, OUP - N.B. latest edition ISBN: 0198511655
The fundamental basis for the description of material properties using tensors. A complete reference for the major tensor uses.

*Tensor Properties of Crystals* by D. R. Lovett, IoP - N.B. latest edition ISBN: 0750306262
A much more concise treatment that gives a good overview of tensors in Materials Science.
Aims

On completion of this TLP you should:

* know how to measure the Young's modulus of a material from the deflection of a cantilever beam made from the material
* understand the origin of the thermal expansion of a solid
* understand how the curvature of a bi-material strip is related to the stiffness of the materials from which it is made and the temperature change it undergoes
* be able to use this relationship to calculate temperature changes and linear expansivities

Before you start

This TLP assumes that you:

* Are familiar with Young's modulus as a measure of the stiffness of a solid material. (A brief reminder is provided.)
* Are familiar with the concept of the second moment of area (also called the moment of inertia).
* Are familiar with the vibrational atomic interpretation of temperature in a solid, and the energy-displacement relationship for atoms in a solid.

Introduction

This TLP is in three parts. The first part involves using the deflection of a cantilever beam to measure the Young's modulus of three materials. In the second part, the origin of thermal expansion in a solid is considered. The relationship between the curvature of a bi-material strip, the stiffness and thermal expansivities of the materials from which it is made, and the temperature change undergone is examined. The boiling temperature of nitrogen is then estimated by using this relationship and measuring the change in shape of a bi-material strip made from materials (steel and aluminium) of known thermal expansivities. In the third part, the previous experiment is repeated with a bi-material strip composed of aluminium and polycarbonate. The measured curvature, together with the estimated boiling temperature of nitrogen from the second part, is used to estimate the thermal expansivity of the polycarbonate.

Experiment: Measurement of Young's modulus

A cantilever beam is fixed at one end and free to move vertically at the other, as shown in the diagram below.

![Diagram of cantilever beam](images/cantilever.gif)

Geometry of the cantilever beam test.

For each of three strips of material (steel, aluminium and polycarbonate), the strip is clamped at one end so that it extends horizontally, with the plane of the strip parallel to the plane of the bench. A small weight is hung on the free end and the vertical displacement, *δ*, measured. The value of *δ* is related to the applied load, *P*, and the Young's modulus, *E*, by

| | |
| - | - |
| \[\delta = \frac{1}{3}\frac{{P{L^3}}}{{EI}}\] | (1) |

where *L* is the length of the strip, and *I* the second moment of area (moment of inertia). For a prismatic beam with a rectangular section (depth *h* and width *w*), the value of *I* is given by

| | |
| - | - |
| \[I = \frac{{w {h^3}}}{{12}}\] | (2) |

By hanging several different weights on the ends of the strips, and measuring the corresponding deflections, a graph can be plotted which allows the Young's modulus to be calculated. This is repeated for each of the three materials. The calculated values for the Young's modulus may be compared with the values in the accompanying data table.

Results: Measurement of Young's modulus =

A set of results for the steel strip is given below.
| **load, m (kg)** | **deflection, *δ* (10⁻³ m)** |
| - | - |
| 0 | 0 |
| 0.05 | 3 |
| 0.10 | 6.5 |
| 0.15 | 9 |
| 0.20 | 13 |
| 0.25 | 16 |

A graph of these results gives:

![Graph of deflection vs load](images/graph1.gif)

The gradient of this graph is

\[{\rm{gradient}} = \frac{16 \times 10^{-3}}{0.25} = 6.4 \times 10^{-2}\,{\rm{m\,kg^{-1}}}\]

In addition

| | |
| - | - |
| width of strip (w) = | 0.01 m |
| thickness of strip (h) = | 0.001 m |
| length of strip (L) = | 0.15 m |

So, using equation (2),

\[I = \frac{w h^3}{12} = \frac{0.01 \times 0.001^3}{12} = 8.3 \times 10^{-13}\,{\rm{m^4}}\]

From equation (1) with *P* = *mg*, a graph of *δ* against *m* will have a gradient

\[{\rm{gradient}} = \frac{1}{3}\frac{g L^3}{EI}\]

and hence

\[E = \frac{1}{3}\frac{g L^3}{{\rm{gradient}} \times I} = \frac{9.8 \times 0.15^3}{3 \times 6.4 \times 10^{-2} \times 8.3 \times 10^{-13}} = 2.1 \times 10^{11}\,{\rm{Pa}}\ (2\ {\rm{s.f.}})\]

So the steel from which the strip is made has a Young's modulus of 210 GPa (close to the figure given in the data table). Repeating the experiment for the aluminium and polycarbonate strips gives Young's moduli of 70 GPa and 5.5 GPa respectively. These results will be used later in the TLP.
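The worked example above is easily scripted; this sketch simply re-runs its arithmetic, using the values quoted in the text.

```python
# Young's modulus from the deflection-load gradient of a cantilever:
# delta = P L^3 / (3 E I), with P = m g
g = 9.8                 # m s^-2
L = 0.15                # strip length, m
w, h = 0.01, 0.001      # strip width and thickness, m

I = w * h**3 / 12       # second moment of area, equation (2), m^4
gradient = 6.4e-2       # measured gradient of deflection vs mass, m kg^-1

E = g * L**3 / (3 * gradient * I)
print(f"E = {E:.2e} Pa")        # ~2.1e11 Pa, i.e. about 210 GPa
```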
Origin of thermal expansion =

The energy-displacement relationship for atoms in a solid is shown schematically below.

![Schematic graph of energy against interatomic separation](images/binding.jpg)

Schematic depiction of the dependence of the potential energy of an atom within a solid on the inter-atomic spacing

As the temperature is raised, the amplitude of vibration increases. The asymmetrical nature of the potential well means that this is accompanied by an increase in the average inter-atomic spacing for longitudinal vibrations. The Coefficient of Thermal Expansion (CTE), or thermal expansivity, *α*, is the relative change in linear dimensions per unit of temperature change. In general, ceramics have low thermal expansivities, metals higher, and polymers higher still. Thermal expansivity values for some selected engineering materials, along with some other material data, are given in the accompanying data table.

The bi-material strip =

When a component made up of two different materials bonded together is heated or cooled, a misfit is generated between the new dimensions each would adopt if they were isolated. This mismatch can set up stresses and associated distortions. On the other hand, the effect can be exploited by using such a couple to detect or measure temperature changes. An example of such a sensor is the simple bimetallic strip, which has long been used in thermostats and other thermal devices. As shown in the following diagram, if unbonded, the free lengths of each material would be different after a temperature change. When bonded, however, the difference in unconstrained lengths gives rise to internal stresses within the strip, causing it to bend.

![Diagram of bi-material strip](images/bimaterial-strip1.gif)

The bimaterial strip: (a) Two strips of equal initial length undergo (b) a temperature change ΔT, such that the relative difference in their unconstrained lengths is Δε (= Δα ΔT). (c) Since the two strips are in fact bonded together, the resulting internal stresses generate a uniform curvature. (d) Clamping a bimaterial strip, to allow measurement of the deflection and hence the curvature.

When thermal equilibrium is reached, the resulting curvature, κ (the reciprocal of the radius of curvature), is related to the displacement, δ, and the distance, *x*, along the strip at which the displacement is being measured by the relationship

| | |
| - | - |
| \[\kappa = \frac{{2\sin \left[ {{{\tan }^{ - 1}}\left( {{\delta }/{x}} \right)} \right]}}{{\sqrt {\left( {{x^2} + {\delta ^2}} \right)} }}\] | (3) |

This can be derived using the geometrical construction shown below.

![Geometrical construction](images/geometry.gif)

Geometry for derivation of equation (3)

\[\kappa = \frac{2\sin\theta}{\sqrt{x^2 + \delta^2}}\]

i.e.

\[\kappa = \frac{2\sin\left[\tan^{-1}\left(\delta/x\right)\right]}{\sqrt{x^2 + \delta^2}}\]

The curvature is also related to the material properties and dimensions through the equation

| | |
| - | - |
| \[\kappa = \frac{6 E_A E_B \left( h_A + h_B \right) h_A h_B \Delta\varepsilon}{E_A^2 h_A^4 + 4 E_A E_B h_A^3 h_B + 6 E_A E_B h_A^2 h_B^2 + 4 E_A E_B h_A h_B^3 + E_B^2 h_B^4}\] | (4) |

where EA, EB are the Young's moduli, and hA, hB the thicknesses, of the two materials A and B. The misfit strain, Δε, is given by

| | |
| - | - |
| Δε = (αA − αB) Δ*T* | (5) |

where αA and αB are the thermal expansivities of the constituents. Derivation of equation (4) is not particularly complex, but need not concern us here (see Clyne T W, *Key Engineering Materials*, vol. 116/117 (1996) p. 307-330). It is based on balancing the bending moment generated by the misfit strain against the opposing moment offered by the beam. It can be seen that the curvature depends not just on the expansivity mismatch and temperature change, but also on the relative stiffness and thickness of the two materials. If the two strips are of equal thickness (*h*), and the stiffness ratio is termed *E*\*, the equation can be re-written

| | |
| - | - |
| \[\kappa = \frac{12 \Delta\varepsilon}{h\left( E_* + 14 + 1/E_* \right)}\] | (6) |

It can be seen that there is a scale effect - i.e. the curvature will be greater when the strips are thinner. Furthermore, a glance at the denominator shows that the curvature will be small if one of the materials has a much greater stiffness than the other. Assuming the α values for steel and aluminium given in the data table to be correct, it is possible to use eqns. (3), (5) and (6) to estimate the value of ΔT corresponding to a measured curvature of the steel-Al strip, and hence estimate the boiling temperature of liquid nitrogen; a short numerical sketch of this calculation follows the experimental description below.

Experiment: Estimating the boiling temperature of nitrogen

A steel tray is used, with a transparent hinged lid, on which a scale is attached.

Steel tray with a transparent hinged lid on which a scale is attached (Click on image to view larger version.)

A steel-aluminium bi-material strip is used, with the two constituent strips having the same thickness; the strips are straight at room temperature. The steel-aluminium strip is fixed securely in the tray, using bolts and wing nuts, locating a spacer block between the side of the tray and the strip, as below.

![](images/bimaterial-strip2.gif)

Arrangement for securing a specimen in the tray

The material with the smaller value of α (steel in this case) is arranged to be closest to the wall of the tray, so that the strip will curve away from the nearest wall when cooled.
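As flagged above, equations (3), (5) and (6) chain together directly. The sketch below uses the steel-aluminium values quoted in the results section; small differences from the worked numbers are just rounding.

```python
import math

def curvature(delta, x):
    """Equation (3): curvature from deflection delta measured at distance x."""
    return 2 * math.sin(math.atan(delta / x)) / math.sqrt(x**2 + delta**2)

def misfit_strain(kappa, h, E_star):
    """Equation (6) rearranged, for strips of equal thickness h."""
    return kappa * h * (E_star + 14 + 1 / E_star) / 12

kappa = curvature(delta=0.021, x=0.189)              # ~1.2 m^-1
d_eps = misfit_strain(kappa, h=0.001, E_star=210 / 70)
dT = d_eps / ((1.5 - 2.3) * 1e-5)                    # equation (5)
print(f"kappa = {kappa:.2f} m^-1, misfit strain = {d_eps:.2e}, dT = {dT:.0f} K")
```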
Liquid nitrogen is carefully poured into the tray, so that the strip is partially immersed, avoiding pouring directly onto the strip where possible. **It is IMPORTANT that safety glasses and thick gloves are worn throughout this procedure**. (Splashing of small amounts of liquid nitrogen onto clothes or skin is not particularly dangerous, but touching very cold metal with unprotected hands can cause severe injury.) The bi-material strip adopts a uniform curvature, arising from the difference in the free (unconstrained) contractions of each material in the strip. The deflection δ is recorded using the scale on the lid of the tray.

Liquid nitrogen being poured into the tray (Click on image to view larger version.)

Results: Estimating the boiling temperature of nitrogen =

For the steel-aluminium bi-material strip the following measurements were made

| | |
| - | - |
| *δ* = | 0.021 m |
| *x* = | 0.189 m |

From equation (3) the curvature

\[\kappa = \frac{2 \times \sin\left[\tan^{-1}\left(0.021/0.189\right)\right]}{\sqrt{0.021^2 + 0.189^2}} = 1.2\,{\rm{m^{-1}}}\]

From equation (6), the misfit strain

\[\Delta\varepsilon = \frac{1}{12}\kappa h\left(E_* + 14 + \frac{1}{E_*}\right) = \frac{1}{12} \times 1.2 \times 0.001 \times \left(\frac{210}{70} + 14 + \frac{70}{210}\right) = 1.7 \times 10^{-3}\]

And so, from equation (5)

\[\Delta T = \frac{\Delta\varepsilon}{\alpha_{\rm{A}} - \alpha_{\rm{B}}} = \frac{1.7 \times 10^{-3}}{(1.5 - 2.3) \times 10^{-5}} = -213\,{\rm{K}}\]

Given an initial room temperature of 20°C, this gives the boiling temperature of nitrogen as 20 − 213 = −193°C (compare the accepted figure of −196°C).

Experiment: Measuring the thermal expansivity of polycarbonate

First the tray and specimen are brought back to room temperature by immersing them in a bucket of water. (Warming up is complete when the strip has become approximately straight again.) **It is IMPORTANT not to attempt to swap over specimens while the tray is cold.** The experiment is then repeated for the aluminium-polycarbonate strip. The thermal expansivity of polycarbonate can then be estimated, again assuming that the expansivity of aluminium given in the data table is correct, and using the value for Δ*T* obtained previously.

Results: Measuring the thermal expansivity of polycarbonate =

For the aluminium-polycarbonate bi-material strip the following measurements were made

| | |
| - | - |
| *δ* = | 0.04 m |
| *x* = | 0.189 m |

From equation (3) the curvature

\[\kappa = \frac{2 \times \sin\left[\tan^{-1}\left(0.04/0.189\right)\right]}{\sqrt{0.04^2 + 0.189^2}} = 2.1\,{\rm{m^{-1}}}\]

From equation (6), the misfit strain

\[\Delta\varepsilon = \frac{1}{12}\kappa h\left(E_* + 14 + \frac{1}{E_*}\right) = \frac{1}{12} \times 2.1 \times 0.001 \times \left(\frac{70}{5.5} + 14 + \frac{5.5}{70}\right) = 4.7 \times 10^{-3}\]

And so, from equation (5)

\[\alpha_{\rm{B}} = \alpha_{\rm{A}} - \frac{\Delta\varepsilon}{\Delta T} = 2.3 \times 10^{-5} - \frac{4.7 \times 10^{-3}}{-213} = 4.5 \times 10^{-5}\,{\rm{K^{-1}}}\]

So the thermal expansivity of the polycarbonate material is 4.5 × 10⁻⁵ K⁻¹.

Summary =

In this TLP we have:
1. Learned how the Young's modulus of a material may be determined using the relationship between the deflection of a cantilever beam *δ* and the load *P* applied to it, given by \[\delta = \frac{1}{3}\frac{{P{L^3}}}{{EI}}\] where *L* is the distance from the support to the point of load application, *I* is the second moment of area of the beam's cross section, and *E* is the Young's modulus of the beam's material.
2. Used measurements of the deflection and load of a steel cantilever beam to determine the Young's modulus of steel as 210 GPa.
3. Looked at how the asymmetry in the energy-separation graph for atoms in a solid gives rise to an increase in the average interatomic spacing as the temperature is increased, which is the origin of thermal expansion.
4. Derived a formula relating the curvature *κ* to the distance *x* and deflection *δ* for a bi-material strip: \[\kappa = \frac{{2\sin \left[ {{{\tan }^{ - 1}}\left( {\delta }/{x} \right)} \right]}}{{\sqrt {\left( {{x^2} + {\delta ^2}} \right)} }}\]
5. Defined the misfit strain Δ*ε* for the two components of a bi-material strip as Δε = (αA − αB) Δ*T*, and quoted a formula relating the curvature *κ* to the misfit strain, the thickness *h* of the strips (of equal thickness) and the ratio of the Young's moduli *E*\*: \[\kappa = \frac{12 \Delta\varepsilon}{h\left( E_* + 14 + 1/E_* \right)}\]
6. Used measurements of the change in shape of a steel-aluminium bi-material strip immersed in liquid nitrogen to estimate the boiling temperature of nitrogen as −193°C.
7. Used measurements of the change in shape of an aluminium-polycarbonate bi-material strip immersed in liquid nitrogen to estimate the thermal expansivity of the polycarbonate as 4.5 × 10⁻⁵ K⁻¹.

Questions =

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. Which of the following materials has the highest Coefficient of Thermal Expansion (CTE)?

| | | |
| - | - | - |
| | a | alumina |
| | b | aluminium |
| | c | copper |
| | d | mild steel |

2. In the experiment to determine Young's modulus, hanging a weight on the cantilever beam leads to a measurement of vertical displacement, *δ*. This value is related to the applied load and the Young's modulus by the following equation: \[\delta = \frac{1}{3}\frac{{P{L^3}}}{{EI}}\] What does *I* represent in the equation?

| | | |
| - | - | - |
| | a | the applied load |
| | b | Young's modulus |
| | c | the second moment of area |
| | d | the distance between the clamp and the position of the weight on the strip |

3. Which of the following materials has the highest value of Young's modulus?

| | | |
| - | - | - |
| | a | alumina |
| | b | aluminium |
| | c | copper |
| | d | mild steel |

4. What do *α*A and *α*B represent with respect to materials A and B in the following equation, which calculates the misfit strain? Δ*ε* = (*α*A - *α*B)Δ*T*

| | | |
| - | - | - |
| | a | the thickness of the two materials |
| | b | the thermal diffusivities of the materials |
| | c | temperature changes |
| | d | thermal expansivities |

### Deeper questions

*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

5. Explain in terms of atomic structure and bonding why polymers tend to have lower stiffnesses and higher expansivities than metals and ceramics.
6. From the data given in the data table, suggest a suitable pair of metals for the construction of a bimetallic strip.
What other factors do you think might in practice be relevant for making such a choice?

Going further =

### Websites

* A page considering Young's modulus in the context of elasticity theory.
* The thermodynamics part of a website based at Georgia State University, which includes a section on thermal expansion, with a video clip of a bimetallic strip being dipped into liquid nitrogen.
Aims

On completion of this TLP package, you should:

* Understand the basic mechanisms and models of thermal and electrical conduction, in metals and non-metals.
* Be aware of some of the factors that affect both types of conduction.
* Know some of the applications for both types of conductors and insulators.

Before you start

This TLP is an introduction, so no specific prior knowledge is required. There are, however, other TLPs that cover more advanced topics, such as semiconductors, which are linked to in the further reading section.

Introduction

Electrical conductivity spans an incredibly large range, around 30 orders of magnitude from insulators to metals, and can even be infinite in superconductors. Knowledge of how to control it has allowed for the computer revolution and ever-increasing miniaturisation. Thermal conductivity, while only spanning around 10 orders of magnitude for known materials, is still crucial for many important technological advancements, from jet turbines and space travel to USB drinks coolers.

To truly appreciate these achievements, it is vital to have an understanding of how conductivity arises in materials. There are simple models that can be used to predict the behaviour of many materials; close parallels exist between thermal and electrical conduction in metals, whereas the conduction mechanisms in non-metals are quite different.

Introduction to conduction

Electrical conduction -

It is important not to confuse conduction, conductivity, resistance, and resistivity. The material properties are the electrical conductivity, σ, and the electrical resistivity, ρ.

The electrical conductivity of a material is defined as the amount of electric charge transferred per unit time across unit area under the action of a unit potential gradient:

J = σ E

where J is the current density (current per unit area) and E is the potential gradient. This is another way of expressing Ohm's law, which is more commonly stated as \( V = I R \). For an isotropic material:

\[ \sigma = \frac 1 \rho \]

The units of electrical resistivity are the ohm metre (**Ω m**), and for conductivity, the inverse (**Ω⁻¹ m⁻¹**). For an actual sample of length l and cross-sectional area A, the resistance, R, is calculated by:

\[ R = \rho \frac l A \]

Electrical signals propagate at close to the speed of light, though this does **not** mean the electrons themselves move this quickly. Instead, the typical electron *drift velocity* (their average velocity) is much lower: less than 1 mm s⁻¹. This is expanded upon in the Drude model section, and illustrated numerically below.

Another pertinent reminder is the distinction between potential and current: current is the flow of electrons, and potential is the driving force that makes them flow. With sufficient potential, electrons may carry charge through any material, including a vacuum (see CRT), though they are powerless without any net current flow.

The best electrical conductors (apart from superconductors) are pure copper and pure silver, with resistivities of 16.78 and 15.87 nΩ m respectively. For comparison, polystyrene has a resistivity of up to 10²⁸ nΩ m, 27 orders of magnitude different!
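A quick sketch of R = ρl/A and of the drift velocity mentioned above (from J = nev). The copper resistivity is the value just quoted; the free electron density n is a standard textbook estimate, not a figure from this TLP.

```python
import math

rho = 16.78e-9             # resistivity of pure copper, ohm m (from the text)
l, d = 1.0, 1e-3           # 1 m of 1 mm diameter wire
A = math.pi * (d / 2)**2   # cross-sectional area, m^2

R = rho * l / A
print(f"R = {R * 1e3:.1f} mOhm")                      # ~21 mOhm

I = 1.0                    # assume a 1 A current
J = I / A                  # current density, A m^-2
n, e = 8.5e28, 1.602e-19   # approximate free electron density of Cu; charge
v_drift = J / (n * e)
print(f"drift velocity = {v_drift * 1e3:.2f} mm/s")   # well under 1 mm/s
```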
Thermal conduction -

To understand thermal conductivity in materials, it is important to be familiar with the concept of heat transfer, which is the movement of thermal energy from a hotter to a colder body. It occurs in several circumstances:

* When an object is at a different temperature from its surroundings;
* When an object is at a different temperature to another object in contact with it;
* When a temperature gradient exists within the object.

The direction of heat transfer is set by the second law of thermodynamics, which states that the entropy of an isolated system which is not in thermal equilibrium will tend to increase over time, approaching a maximum value at equilibrium. This means heat transfer always occurs from a body at a higher temperature to a body at a lower temperature, and will continue until thermal equilibrium is reached.

A transfer of thermal energy occurs only through 3 modes: conduction, convection, and radiation. Each mode has a different mechanism and rate of heat transfer, and thus, in any particular situation, the rate of heat transfer depends on how prevalent a certain mode is.

**Conduction** involves the transfer of thermal energy by a combination of diffusion of electrons and phonon vibrations - applicable to solids.

**Convection** involves the transfer of thermal energy in a moving medium - the hot gas/liquid moves through the cooler medium (normally due to density differences).

**Radiation** involves the transfer of thermal energy by electromagnetic radiation. The sun is a good example of energy transfer through a (near) vacuum.

This TLP focuses on conduction in crystalline solids. Thermal conductivity, κ, is the material property that indicates the ability to conduct heat. Fourier's law gives the rate of heat transfer as proportional to the temperature difference and the cross-sectional area, and inversely proportional to the length of the sample:

\[ H = \frac{\Delta Q}{\Delta t} = \kappa A\frac {\Delta T}{l}\]

where ΔQ/Δt is the rate of heat transfer, A is the surface area and l is the length.

The best metallic thermal conductors are pure copper and silver. At room temperature, commercially pure copper typically has a conductivity of about 360 W m⁻¹ K⁻¹ (although the thermal conductivity of a single crystal of copper was measured at 12,200 W m⁻¹ K⁻¹ at a temperature of 20.8 K). In metals, the movement of electrons dominates the conduction of heat.

The bulk material with the highest thermal conductivity (aside from the superfluid helium II) is, perhaps surprisingly, a non-metal: pure single crystal diamond, which has a thermal conductivity at room temperature of around 2200 W m⁻¹ K⁻¹. The high conductivity is even used to test the authenticity of a diamond. Strong covalent bonds within the crystal are responsible for the high conductivity: even though there are no free electrons, heat is conducted by phonons. Some natural diamonds contain boron atoms that replace carbon atoms in the crystal lattice; such diamonds also have high thermal conductance.
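A one-line application of Fourier's law quoted above; the bar dimensions and temperature difference are illustrative, and κ is the approximate room-temperature value for commercially pure copper given in the text.

```python
kappa = 360.0    # thermal conductivity, W m^-1 K^-1
A = 1e-4         # cross-sectional area, m^2 (a 10 mm x 10 mm bar)
l = 0.5          # bar length, m
dT = 80.0        # temperature difference between the ends, K

H = kappa * A * dT / l    # rate of heat transfer, W
print(f"H = {H:.1f} W")   # ~5.8 W
```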
Metals: the Drude model of electrical conduction

Due to the quantum mechanical nature of electrons, a full simulation of electron movement in a solid (i.e. conduction) would require consideration of not only all the positive ion cores interacting with each electron, *but also each electron with every other electron*. Even with advanced models, this rapidly becomes far too complicated to model adequately for a material of macroscopic scale.

The Drude model simplifies things considerably by using classical mechanics and treats the solid as a fixed array of nuclei in a 'sea' of unbound electrons. Additionally, the electrons move in straight lines, do not interact with each other, and are scattered randomly by nuclei.

Rather than model the whole lattice, two statistically derived numbers are used: **τ**, the average time between collisions (the **scattering time**), and **l**, the average distance travelled between collisions (the **mean free path**).

Under the application of a field, E, electrons experience a force −eE, and thus an acceleration from F = ma. For an electron emerging from a collision with velocity v₀, the velocity after time t is given by:

\[v = v_{0} - \frac{eEt}{m} \]

Since the electrons are scattered randomly by each collision, the average of v₀ will be zero. If we also consider the time t = τ, an equation for the **drift velocity** is given:

\[v = \frac{-eE\tau}{m} \]

For **n** free electrons per unit volume, the current density J is:

J = −nev

Substituting v for the drift velocity:

\[J = \frac {ne^{2}\tau E}{m} \]

The conductivity σ = neμ, where μ is the **mobility**, which is defined as

\[ \mu = \frac{|v|}{E} = \frac{eE\tau}{mE} = \frac{e\tau}{m} \]

The net result of all this maths is a reasonable approximation of the conductivity of a number of monovalent metals. At room temperature, by using the kinetic theory of gases to estimate the drift velocity, the Drude model gives σ ~ 10⁶ Ω⁻¹ m⁻¹. This is about the right order of magnitude for many monovalent metals, such as sodium (σ ~ 2.13 × 10⁷ Ω⁻¹ m⁻¹).

The Drude model can be visualised using the following simulation. With no applied field, it can be seen that the electrons move around randomly. Use the slider to apply a field, to see its effect on the movement of the electrons.

However, it is important to note that for non-metals, multivalent metals, and semiconductors, the Drude model fails miserably. To be able to predict the conductivity of these materials more accurately, quantum mechanical models such as the nearly free electron model are required. These are beyond the scope of this TLP. Superconductors are also not explained by such simple models; more information can be found in their own TLP.
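A back-of-envelope sketch of the Drude result σ = ne²τ/m. The values of n and τ are illustrative order-of-magnitude assumptions (roughly one free electron per atom, and a typical room-temperature scattering time, for a monovalent metal); they are not taken from this TLP.

```python
e = 1.602e-19    # electron charge, C
m = 9.109e-31    # electron mass, kg
n = 2.5e28       # assumed free electron density, m^-3
tau = 1e-14      # assumed scattering time, s

sigma = n * e**2 * tau / m
mu = e * tau / m                           # mobility
print(f"sigma ~ {sigma:.1e} ohm^-1 m^-1")  # ~7e6: the right ballpark for a metal
print(f"mu ~ {mu:.1e} m^2 V^-1 s^-1")
```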
Factors affecting electrical conduction =

Electrical conduction in most metallic conductors (not semiconductors!) is straightforward to approximate. There are three important cases:

Pure and nearly pure metals -

For pure metals at around room temperature, the resistivity depends linearly on temperature:

\[ \rho_2 = \rho_1 [1 + \alpha(T_2 - T_1)]\]

However, at low temperatures the dependence ceases to be linear (superconductors are dealt with separately), and the resistivity is described by Matthiessen's rule:

\[ \rho(T) = {\rho _{{\rm{defect}}}} + {\rho _{{\rm{thermal}}}} \]

![](images/matthiessens_1.png)

The low temperature resistivity (\({\rho _{{\rm{defect}}}}\)) depends on the concentration of lattice defects, such as dislocations, grain boundaries, vacancies, and interstitial atoms. Consequently, it is lower in annealed, large-crystal metal samples, and higher in alloys and work hardened metals. You might think that at higher temperatures the electrons would have more energy to be able to move through the material, so perhaps it is rather surprising that resistivity increases (and conductivity therefore decreases) as temperature increases. The reason for this is that as temperature increases, the electrons are scattered more frequently by lattice vibrations, or phonons, which causes the resistivity to increase. This contribution to the resistivity is described by ρ**thermal**.

The temperature dependence of the conductivity of pure metals is illustrated schematically in the following simulation. Use the slider to vary the temperature, to see how the movement of the electrons through the lattice is affected. You can also introduce interstitial atoms by clicking within the lattice.

Alloys - solid solution -

As before, adding an impurity (in this case another element) decreases the conductivity. For a solid solution, the variation of resistivity with composition is given by Nordheim's rule:

\[ \rho = \chi_A\rho_A + \chi_B\rho_B + C\chi_A\chi_B \]

where C is a constant and χA and χB are the atomic fractions of the metals A and B, whose resistivities are ρA and ρB respectively.

Further, the increase in resistivity caused by a solute is proportional to the square of the difference in valency between the solute and the solvent - Linde's rule:

\[\Delta \rho \propto (\Delta Z)^2 \]

where ΔZ is the difference in valence between the solute and the solvent. Thus, solute atoms with a higher (or lower) charge than the lattice will have a greater effect on the resistivity.

Alloys - many phases -

For an alloy where there are two or more distinct phases, the contributions simply add linearly to the total resistivity (though the effect of the many grain boundaries increases resistivity slightly):

\[ \rho = \chi_\alpha\rho_\alpha + \chi_\beta\rho_\beta \]

The following animation illustrates Matthiessen's rule, Nordheim's rule and the mixture rule.

Thermal conduction in metals =

Metals typically have a relatively high concentration of free conduction electrons, and these can transfer heat as they move through the lattice. Phonon-based conduction also occurs, but the effect is swamped by that of electronic conduction.

The following simulation shows how electrons can conduct heat by colliding with the nuclei and transferring thermal energy. Click the "source" button to apply a heat source to one side of the sample. The graph will show the thermal gradient within the sample, and you can also apply a heat sink to the opposite side of the sample using the "sink" button.

Wiedemann-Franz law -

Since the dominant method of conduction is the same in metals for thermal and electrical conduction (i.e. electrons!), it makes sense that there is a relationship between the two conductivities. The **Wiedemann-Franz law** states that the ratio of the thermal conductivity to the electrical conductivity of a metal is proportional to its temperature:

\[\frac{\kappa }{\sigma } = LT\]

where L, the proportionality constant (also known as the Lorenz number), is:

\[L = \frac{\kappa }{{\sigma T}} = 2.45 \times {10^{ - 8}}\,{\rm{W}}\,\Omega\,{{\rm{K}}^{ - 2}}\]

The law can be explained by the fact that free electrons in the metal are involved in the mechanisms of both heat and electrical transport. The thermal conductivity increases with the average electron velocity, since this increases the forward transport of energy. However, the electrical conductivity decreases with an increase in particle velocity, because the collisions divert the electrons from the forward transport of charge.

![Wiedemann-Franz graph](images/weidemann_graph.png)
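The law is easy to sanity-check; this sketch uses the approximate room-temperature copper values quoted earlier in this TLP (κ for commercially pure copper, and σ from the resistivity of pure copper), so only rough agreement with the Lorenz number should be expected.

```python
kappa = 360.0           # W m^-1 K^-1, commercially pure copper (from the text)
sigma = 1 / 16.78e-9    # ohm^-1 m^-1, from rho = 16.78 nOhm m (from the text)
T = 293.0               # room temperature, K

L = kappa / (sigma * T)
print(f"L = {L:.2e} W Ohm K^-2")   # ~2.1e-8, close to the quoted 2.45e-8
```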
Electrical conduction: non-metals =

Although the Drude model works reasonably well for monovalent metals, it does not predict the properties of semiconductors, superconductors, or non-metallic conductors. Semiconductors and superconductors are best explained in their own TLPs.

Ionic conduction

For certain materials, there is no net movement of electrons, yet they still conduct electricity. The mechanism is that of ionic conduction, whereby some charged ions can move through the bulk lattice (by the usual diffusion mechanisms, except with an electric field driving force). Such ionic conductors are used in solid oxide fuel cells, though for the example of yttria-stabilised zirconia (YSZ), operational temperatures are between 500 and 1000 °C. Because they conduct by a diffusion-like mechanism, higher temperatures lead to higher conductivity, the reverse of what the simple Drude model would predict.

Breakdown voltage

There is an important, and potentially lethal, mechanism by which an insulator can become conductive. In air, it may be commonly recognised as lightning. Of note is that the mechanism can ionise the 'insulator', leaving it temporarily more conductive.

Gases are commonly ionised in domestic lighting devices. The most common are fluorescent tubes and neon lights. To initially excite the mercury vapour in a fluorescent-tube light, a voltage spike exceeding the breakdown voltage is needed. This can be noticed when switching such a light on as a sudden ignition, with an associated radio interference spike. A faulty tube may not fully ionise, leading to only a small glow at the ends.

![](images/breakdown.jpg)

Under high voltages, even plexiglass may conduct. The temporarily ionised path is opaque on cooling, giving a Lichtenberg figure in this case. *Image "Lichtenberg figure" by Bert Hickman*

Non-metals: thermal phonons =

As mentioned previously, metals have two modes of thermal conduction: electron based and phonon based. For non-metals, there are relatively few free electrons, so the phonon method dominates.

Heat can be thought of as a measure of the energy in the vibrations of atoms in a material. As with all things on the atomic scale, there are quantum mechanical considerations: the energy of each vibration is quantised (and proportional to the frequency). A phonon is a quantum of vibrational energy, and by the combination (superposition) of many phonons, heat is observed macroscopically.

The energy of a given lattice vibration in a rigid crystal lattice is quantised into a quasiparticle called a **phonon**. This is analogous to the photon in an electromagnetic wave; thermal vibrations in crystals can be described as thermally excited phonons, in the same way that light can be described in terms of photons. Phonons are a major factor governing the electrical and thermal conductivities of a material.

A phonon is a quantum mechanical adaptation of normal modal vibration in classical mechanics. A key property of phonons is that of wave-particle duality: normal modes have wave-like phenomena in classical mechanics, but gain particle-like behaviour under quantum mechanics.

The energy of a phonon is proportional to its angular frequency ω:

\[\varepsilon = (n + \frac{1}{2})\hbar \omega \]

with quantum number *n*. The term \(\frac{1}{2}\hbar \omega \) is the zero point energy of the mode. This is defined as the lowest possible energy that the system possesses, and is the energy of the ground state.

If a solid has more than one type of atom in the unit cell, there will be two possible types of phonons: "acoustic" and "optical" phonons. The frequency of acoustic phonons is around that of sound, and for optical phonons, close to that of infrared light. They are referred to as optical because in ionic crystals they are excited easily by electromagnetic radiation.
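To put rough numbers on the quantisation formula above: the sketch below evaluates the phonon energy levels for a 1 THz mode (an arbitrary illustrative frequency) and compares them with the thermal energy scale kBT.

```python
import math

hbar = 1.055e-34   # reduced Planck constant, J s
kB = 1.381e-23     # Boltzmann constant, J K^-1

omega = 2 * math.pi * 1e12               # angular frequency of a 1 THz mode
E = lambda n: (n + 0.5) * hbar * omega   # epsilon = (n + 1/2) hbar omega

print(f"zero point energy (n=0):   {E(0):.2e} J")
print(f"first excited level (n=1): {E(1):.2e} J")
print(f"k_B T at 300 K:            {kB * 300:.2e} J")  # larger: mode well populated
```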
If a crystal lattice is at zero temperature, it lies in its ground state and contains no phonons. When the lattice is heated to and held at a non-zero temperature, its energy is not constant, but fluctuates randomly about some mean value. These energy fluctuations are caused by random lattice vibrations, which can be viewed as a gas of phonons. Because the temperature of the lattice generates these phonons, they are sometimes referred to as **thermal phonons**. Thermal phonons can be created or destroyed by random energy fluctuations.

It is accepted that phonons also have momentum, and can therefore conduct energy through the lattice. Unlike electrons, there is a net movement of phonons - from the hotter to the cooler part of the lattice, where they are destroyed. Electrons must maintain charge neutrality in the lattice, so there is no net movement of electrons during thermal conduction.

The following simulation shows schematic optical and acoustic phonons in a 2D lattice, and has the option to animate a 2D wavevector defined by clicking inside the green box.

Umklapp scattering

When two phonons collide, the resulting phonon has the vector sum of their momenta. The quantum mechanical treatment of particles moving in a lattice under the reduced zone scheme (which is beyond the scope of this TLP, but is explored in more depth in the Brillouin zones TLP) leads to a conceptually strange effect: if the momentum is too great (outside the first Brillouin zone), then the resulting phonon moves in almost the opposite direction. This is **Umklapp scattering**, and it is dominant at higher temperatures, acting to reduce the thermal conductivity as the temperature increases.

![Diagram showing umklapp scattering](images/umklapp.png)

Applications

Silicon chips -

As electrical properties vary with microstructure, a type of computer memory called phase-change random-access memory (PC-RAM) has been developed. The material used is a chalcogenide referred to as GST (Ge₂Sb₂Te₅). The amorphous state is semiconducting, while in a (poly)crystalline form it is metallic. Heating above the glass transition, but below the melting point, crystallises a previously semiconducting amorphous cell. Likewise, fully melting, then rapidly cooling, a cell leaves it in the amorphous semiconducting state. This variation of resistivity with microstructure is crucial to the operation of such devices.

By varying the heating conditions, a varying proportion of each GST cell may be crystalline and amorphous - the mixture rule applies, as it is effectively a two-phase material. This allows for multiple distinguishable levels of resistance per cell, increasing the storage density and reducing the cost per megabyte.

![](images/pram.jpg)

The more common problem with silicon devices is dissipating heat. A modern processor has a thermal design power of above 70 W (Intel i7 3770, 22 nm process). A cooler must dissipate that specified amount of heat from the die's surface, which is typically less than 10 cm². It is common for heat sinks to have a copper block attached to the microprocessor casing by thermal paste and pressure. The bulk of the heat sink is usually made from much cheaper aluminium, though the high thermal conductivity of copper is necessary for the interface. Thermal paste, whilst a better thermal conductor than air, is much worse than most metals, so it is only used as a thin layer to replace air gaps.
![](images/heatsink.jpg)

Conduction is not the most efficient method to carry heat to a separate heat sink, so convection and the latent heat of evaporation can be used. Heat pipes, typically made from copper, are filled with a low boiling point liquid, which boils at the hot end and condenses at the cool end of the pipe. This is a much faster way of transferring heat over longer distances.

Space -

There are many applications of thermal insulators, with development coming from attempts to improve bulk mechanical properties while retaining insulating properties (i.e. don't let heat through, but don't melt). A particularly famous application of thermal insulation is the (now retired) space shuttle tiles, which were responsible for protecting the shuttle during re-entry into the atmosphere. They are such good insulators that the outside may glow red-hot while, inside the shuttle, the astronauts are still alive.

One of the best thermal insulators is silica aerogel. An aerogel is an extremely low-density solid-state material made from a gel in which the liquid phase has been replaced with gas. The result is an extremely low density solid, which makes it effective as a thermal insulator. One use of aerogel is as a lightweight micrometeorite collector: while extremely light, it is strong enough to capture micrometeors.

![Image showing aerogel in use](images/aerogel.jpg)

Matches stay cool millimetres from a blowtorch; a large array of aerogel bricks is ready to be launched into space; and the resulting space dust is photographed upon return to Earth

Aerogels can be made from a variety of materials, but share a universal structure style (amorphous, open-celled "nanofoams"). However, a common material used is silicate. Silica aerogels were first discovered in 1931.

Aerogels have extreme structures and extreme physical properties. The highly porous nature of an aerogel structure provides a low density. The percentage of open space within an aerogel structure is about 94% for a gel with a density of 100 kg m⁻³.

Aerogels are good thermal insulators because they hinder all three methods of heat transfer (convection, conduction and radiation). They are good convective insulators because air cannot circulate through the lattice. Silica aerogel is an especially good conductive insulator because silica is a poor conductor of heat - a metallic aerogel, on the other hand, would be a less effective insulator. Carbon aerogel is an effective radiative insulator because carbon is able to absorb the infrared radiation that transfers heat. Hence, for maximum thermal insulation, the best aerogel is silica doped with carbon.

Power transmission

One of the largest scale uses of electrical conductors is in power transmission. Unfortunately, the properties that are desirable for a strong cable seem opposed to those for a good conductor. Aluminium alloys can be very strong for their density but, following Nordheim's rule, are much poorer conductors. There is a huge variety of steels but, again, the interstitial carbon atoms increase the resistance compared to pure iron. This means that a larger diameter cable is needed, which, due to the density of steel, ends up being very heavy and expensive. Heavier cable also means we must construct additional pylons, which is a large component of the cost. Copper, while appropriate for home wiring, is dense and increasingly expensive.
For most overhead power cables, the solution is to use two materials: a steel core surrounded by many individual aluminium strands. This achieves a light, high strength cable of acceptable conductivity. Superconductors have been trialled for power transmission, though only underground, and at a considerably higher cost (and efficiency!).

Thermoelectric effect -

The thermoelectric effect is the direct conversion of a difference in temperature into an electric voltage, and vice versa. Simply put, a thermoelectric device creates a voltage when there is a different temperature on each side of the device. It can also be run "backwards", so that when a voltage is applied across it, a temperature difference is created. This effect can be used to generate electricity, to measure temperature, to cool objects, or to heat them. Because the sign of the applied voltage determines the direction of heating and cooling, thermoelectric devices make very convenient temperature controllers.

The Peltier effect is that, when a (direct) current flows through a metal-semiconductor junction, heat is either absorbed or released. This is because the average energy of the electrons in the two materials is different, and heat makes up the difference. A fuller understanding requires knowledge of the band structure, explored further in other TLPs.

Summary =

We have now gone over the foundations of electrical and thermal conduction, as well as some of the more common applications. You should understand the role of electrons and phonons in thermal conduction, as well as how the interactions between them lead to changes in electrical conductivity with temperature. You should appreciate that metals have more heat transfer mechanisms than their non-metal counterparts, which explains why they have higher thermal conductivities. Also, this TLP should have touched on some of the major applications of thermal and electrical conductors and insulators. Finally, the connections between thermal and electrical conductivity in metals have been made, including the Wiedemann-Franz law.

To summarise the factors affecting conductivity:

* Temperature - as temperature increases, the average energy per phonon increases and, by the Umklapp scattering mechanism, thermal conductivity is decreased. Phonons also scatter electrons more.
* Electron density (in metals) - if electrons are the conductors, more (valence) electrons usually leads to better conduction.
* Alloying - interstitials scatter electrons and decrease conductivity. Phase boundaries, impurities, dislocations, etc. decrease conductivity, even at low temperature.

Questions =

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. For phonons, the normal modes

| | | |
| - | - | - |
| | a | Gain particle like behaviour under quantum mechanics. |
| | b | Gain wave like behaviour under quantum mechanics. |
| | c | Adopt both particle and wave like behaviour under quantum mechanics. |

2. Using the assumptions in the free electron model, how do crystal lattices affect electrons?

| | | |
| - | - | - |
| | a | The lattice is not taken into account; lattice imperfections and defects are ignored. |
| | b | The lattice is not taken into account, but lattice imperfections and defects may scatter electrons. |
| | c | The lattice and any defects are taken into account and are able to scatter electrons. |
3. Umklapp scattering is:

| | | |
| - | - | - |
| | a | when two phonons scatter and create a third phonon with a momentum k-vector inside the first Brillouin zone, and the total phonon momentum remains the same. |
| | b | when two phonons scatter and create a third phonon with a momentum k-vector inside the first Brillouin zone, and the total phonon momentum changes. |
| | c | when two phonons scatter and create a third phonon with a momentum k-vector outside the first Brillouin zone, and the total phonon momentum changes. |

4. According to the Wiedemann-Franz law, which of the following is true?

| | | |
| - | - | - |
| | a | thermal conductivity is inversely proportional to electrical conductivity in all materials. |
| | b | thermal conductivity is proportional to electrical conductivity in all materials. |
| | c | thermal conductivity is inversely proportional to electrical conductivity in metals. |
| | d | thermal conductivity is proportional to electrical conductivity in metals. |

5. Which of the following statements about electrical conduction in nearly pure materials are true?

| | | |
| - | - | - |
| | a | At low temperatures, resistivity decreases to zero as the lattice no longer interferes with electron motion. |
| | b | At low temperatures, conductivity decreases to a minimum based on residual lattice defects. |
| | c | Dislocations and grain boundaries provide a low resistance route for electrons to travel through a material. |
| | d | At higher temperatures, the scattering effect of thermal phonons swamps that of residual lattice defects. |
| | e | At low temperatures, conductivity increases with the addition of high valency atoms to the bulk lattice, as they provide more electrons to the lattice. |
| | f | At low temperatures, conductivity does not increase beyond a maximum, due to imperfections in the lattice scattering electrons. |

6. Which of these is the correct order, from best to worst electrical conductivity (assuming pure materials)?

| | | |
| - | - | - |
| | a | Nb3Sn at 4 K, Ag at 300 K, Au at 300 K, Nb3Sn at 300 K, Cu at 300 K. |
| | b | Ag at 300 K, Cu at 300 K, Nb3Sn at 4 K, Au at 300 K, Nb3Sn at 300 K. |
| | c | Nb3Sn at 4 K, Ag at 300 K, Cu at 300 K, Au at 300 K, Nb3Sn at 300 K. |
| | d | Nb3Sn at 300 K, Cu at 300 K, Ag at 300 K, Au at 300 K, Nb3Sn at 4 K. |
| | e | Nb3Sn at 4 K, Cu at 300 K, Nb3Sn at 300 K, Ag at 300 K, Au at 300 K. |

Going further =

### Books

The NST IB Chemistry A course and/or the NST IB Physics A course also cover conduction in more depth.

### Websites

* The TLPs on semiconductors and superconductivity both cover electrical conduction in special cases.
* Brillouin zones are covered in their own TLP, and help explain Umklapp scattering.
Aims On completion of this TLP you should: * understand that surfaces that feel smooth to the touch and look smooth to the naked eye are actually not smooth on a fine scale * appreciate that two nominally flat surfaces are only in contact at certain points ("asperities") * gain a more in-depth knowledge of friction and why it occurs * know about different types of lubricant and why lubrication affects friction * be aware of different types of wear processes

Before you start You should have a basic understanding of frictional forces in everyday life. There are numerous examples of the consequences of friction, for example: * throwing a ball and seeing it come to rest; * slipping when walking on ice; * dragging a table across a floor; * walking on a carpet.

Introduction It is often desirable for frictional forces and wear rates to be low, because friction increases the work needed to achieve a task and wear is detrimental to component performance and lifetime. However, not all engineered materials need to have low friction and low wear rates. High friction between shoes and the floor is desirable when walking, and high wear rates are beneficial in metallographic specimen preparation (grinding away and then polishing a surface).

Surface topography Flat surfaces polished to a mirror finish are not truly flat on an atomic scale, as shown below. ![](images/topography_surface.jpg) ![](images/topography_profile.jpg) Surface roughness can be quantified using a *stylus profilometer*, where a fine stylus is moved over the surface. As it does so it rises and falls, giving the surface profile. This method will, however, produce some smoothing of the true profile, because of the finite dimensions of the stylus tip. This can be seen in the animation below. From a profilometer trace, the average roughness, \(R_{\rm a}\), is defined as \[R_{\rm a} = \frac{1}{L}\int_0^L \left| y(x) \right| \,{\rm d}x\] where y(x) is the height of the surface at x above the mean line and L is the overall length of the profile. The mean line is defined by having equal areas of the profile above and below it. An exaggerated example is shown below. ![](images/topography_rough_surface.jpg) For metals, polished surfaces typically have \(R_{\rm a}\) values of 0.1–0.4 μm.

Contact between macroscopic ‘flat' surfaces - Contact between surfaces occurs only at certain points, called asperities (circled below). **Frictional force and wear originate at these asperities.** These cover only a very small fraction of the total surface area - typically < 1% - but this will vary with factors such as the load on the surfaces. For contact between rubbery plastic surfaces and nominally smooth surfaces (e.g., polished glass), the contact area can approach the nominal area. Asperity contact can be either plastic (most metals) or elastic (most plastics and ceramics), and this is essentially unaffected by changes in the load. Due to asperities, the true area of contact is much less than the nominal area of contact. The **true area of contact** is related to the frictional force, so it is useful to be able to find an estimate for it. Click the link below for the derivation; a simple numerical sketch follows.
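To get a rough feel for the numbers, the sketch below uses the standard plastic-contact estimate that the true contact area is approximately the normal load divided by the indentation hardness, A_true ≈ W/H. The load, hardness and nominal area are assumed illustrative values, not measurements from this TLP.

```python
# Estimate of the true contact area for plastically deforming asperities:
# A_true ~ W / H (normal load over indentation hardness).

W = 50.0            # normal load, N (roughly a 5 kg block; assumed)
H = 1.0e9           # indentation hardness of a steel surface, Pa (assumed)
A_nominal = 1.0e-2  # nominal contact area, m^2 (10 cm x 10 cm; assumed)

A_true = W / H                    # area actually carried by asperities, m^2
fraction = A_true / A_nominal     # fraction of the nominal area in contact

print(f"True contact area ~ {A_true:.1e} m^2")
print(f"Fraction of nominal area in contact ~ {fraction:.1e}")
# ~5e-6, i.e. far below the 'typically < 1%' quoted above.
```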
![](images/topography_asperities.jpg) This concept of asperities is demonstrated by ***Newton's rings***: ![](images/topography_newton_rings1.jpg) A transparent rubbery polymeric phone casing was placed on a mobile phone with a glass back. In some regions concentric rings of different colours were visible - Newton's rings. The images shown were obtained by viewing this with an optical microscope in reflected light mode. ![](images/topography_newton_rings2.jpg) The rings appear where there is dust between the polymer and the glass surfaces. There is substantial contact between the two surfaces here, but there is an air gap where adjacent regions on the polymer and the glass are not in contact. This causes the appearance of Newton's rings. ![](images/topography_newton_rings3.jpg) The formation of Newton's rings can be understood by considering a plano-convex lens placed on a glass slide (effectively a single asperity). An air film of increasing thickness away from the contact point is formed. If white light is used, concentric rings of different colours are seen, as shown in the images. Newton's rings are formed due to interference between light waves reflected from the top and bottom surfaces of the air film (see below). The bright rings are where the waves superpose in phase (path difference of a whole number of wavelengths) and the dark fringes are where the waves superpose in antiphase (path difference of an odd number of half wavelengths). See the birefringence pages in the liquid crystals TLP for a more in-depth explanation of a very similar concept. The rings are circular when the lens is perfectly spherical – which, as the images above show, can be a good first approximation of real situations. ![](images/topography_waves.jpg) ![](images/topography_waves2.jpg)

Friction - recap Friction is the resistance encountered by one body moving over another. As seen previously, when two solid surfaces are placed together, contact will occur only at asperities. Frictional forces and wear originate from the interlocking of asperities because, in order for the surfaces to move relative to each other, asperities must deform and/or fracture, and adhesive forces must be overcome. In general, the greater the proportion of the surface that is in asperity contact, the greater the frictional force. ![](images/friction_apparent_contact.jpg) ![](images/friction_real_contact.jpg) These diagrams show that the true area of contact is far less than the apparent area of contact (asperities circled).

Box on a slope: - ![](images/friction_slope_box.jpg) For the above block to be able to slide to the right, the applied horizontal force must be greater than the frictional force, F. Now, \[F \le \mu N\] where N is the normal load and μ is the coefficient of friction. It is a common observation that the frictional force needed to initiate motion is greater than that needed to maintain it, i.e., μstatic > μdynamic.
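A minimal sketch of this sliding condition is given below; the mass and friction coefficients are assumed values chosen only for illustration.

```python
# Does an applied horizontal force move the box? It slides only when the
# applied force exceeds the limiting static friction, mu_static * N.

g = 9.81           # m/s^2
mass = 20.0        # kg (assumed)
mu_static = 0.6    # assumed static coefficient of friction
mu_dynamic = 0.4   # assumed dynamic coefficient (lower, as noted above)

N = mass * g                 # normal load on a horizontal surface
F_limit = mu_static * N      # maximum static friction force, ~118 N here

for applied in (80.0, 120.0):            # trial applied forces, N
    if applied <= F_limit:
        print(f"{applied:.0f} N applied: box stays put (limit {F_limit:.0f} N)")
    else:
        net = applied - mu_dynamic * N   # kinetic friction acts once sliding
        print(f"{applied:.0f} N applied: box slides, net force {net:.1f} N")
```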
The higher the value of μ, the steeper the slope can be for the box to remain stationary: ![](images/friction_box_slope.jpg) When the object is stationary: resolving parallel to the normal force, N, we have: N = W cos θ. Resolving parallel to the frictional force, F, we have: F = W cos (90° - θ) = W sin θ. Hence, \[\frac{F}{N} = \tan \theta \] so the limiting angle at which the object remains on the slope, θcrit, determines the coefficient of friction, μ: μ = tan θcrit. [An analogous problem arises when leaning a uniform ladder against a smooth vertical wall where the bottom of the ladder is in contact with rough ground.]

Friction - properties of the coefficient of friction, μ = Experimental data showing the **invariance of the coefficient of friction with the apparent area of contact** for wooden sliders on an unlubricated steel surface (Hutchings, p. 24; from E. Rabinowicz, *Friction and Wear of Materials*, 1965). ![Fig](images/friction_brick_large_side.jpg) This can be understood by considering a brick on a table. When the large side of the brick is in contact with the table, the apparent contact area is large and there are many asperities that support the load. When the smaller side is in contact, the apparent contact area is lower and there are fewer asperities. This causes a greater effective normal load on each asperity (same weight but fewer asperities), which results in each asperity contact being larger. The true contact area is therefore essentially the same, and hence so is the friction. ![Fig](images/friction_steel_AL.jpg) Experimental data showing the **invariance of the coefficient of friction with normal load** for the unlubricated sliding of steel on aluminium. **μ is unaffected, but the frictional force increases.** However, there are special cases where μ experiences a transition in value as the normal load changes: ![Fig](images/friction_Cu_Cu.jpg) The variation of the coefficient of friction with applied normal load for copper sliding against copper. At low loads, the two metal surfaces are separated by thin oxide films. At high loads, metallic contact occurs between copper asperities as the oxide films are penetrated (Hutchings, p. 37; from J.R. Whitehead, ‘Surface deformation and friction of metals at light loads', *Proc. Roy. Soc. Lond*. **A201**, 109-124, 1950). A high coefficient of friction results because of the plastic deformation of the contacting metallic surfaces. **μ is independent of load except in the transition region.** ![Fig](images/friction_Bi_Cu.jpg) The effect of sliding speed on the coefficient of friction for pure bismuth and pure copper sliding against themselves (Hutchings, p. 42; from F.P. Bowden and D. Tabor, *The Friction and Lubrication of Solids, Part II*, 1964). At very high speeds, the dissipation of frictional work can raise the temperature at the interface to beyond the melting point of the material involved. Sliding then takes place under hydrodynamic lubrication conditions (see lubrication). Typically, μ ≈ 0.4–1.5 for one metal sliding against another.

Friction theory = The coefficient of friction, μ, is determined by the behaviour of asperity contacts. Adhesive forces, which develop at asperity contacts, and deformation forces, which are needed to plough the asperities of the harder surface through the softer surface, are both important. Adhesion arises from the attractive forces which are assumed to operate at asperity contacts.
Adhesive forces between metals can be greater than the cohesive forces in the softer metal – this is important for wear, as it can result in material being removed from the softer surface. Ploughing forces arise since asperities deform when the surfaces move relative to one another. The animation shows plastic deformation occurring, so applies to most metals. In addition, an important factor in the friction of ceramics is the extent of fracture on the sliding surfaces. Fracture leads to increased friction, since it provides an additional mechanism for the dissipation of energy. Oxide films affect μ. Friction between oxide surfaces, or between oxide and bare metal, is almost always less than between surfaces of bare metal. The strength and thickness of oxide films are therefore important: a weaker oxide film which can be sheared more easily will give a low μ, and a thicker film will make contact between the metals themselves less likely, so a lower μ is more probable. These models give similar results, the key points being that: \[\mu \approx \frac{1}{6}\] and \[\mu \propto \frac{\tau_{\rm i}}{\sigma_{\rm y}}\] A consequence of these models is that films of low shear strength deliberately interposed between the surfaces lower μ considerably – this is the principle behind **lubrication**.

Lubrication - introduction and types of lubricants Lubrication is the process of reducing friction between touching surfaces moving relative to each other by introducing a lubricant between the surfaces: a material with a lower shear strength than the surfaces. Lubricants do not necessarily completely prevent asperity contact, but they reduce the number of asperity junctions and weaken those that form. Lubrication therefore also reduces the rate of sliding wear. μ for many dry engineering materials is rarely below 0.5 and in most cases is significantly higher. Such high values would lead to large frictional forces and hence energy losses (and almost certainly high wear rates). With lubrication μ can be very low (≈ 0.001), which is why lubricants are widely used. Good lubricants have low pour points (the pour point is the lowest temperature at which an oil will flow), high viscosity indices (see later) and good resistance to oxidation.

Types of Lubricant: - **Mineral Oils:** Commercial mineral oils are based on several different hydrocarbon species with mean molecular weights between 300 and 600. Examples are paraffinic oils, which have a predominance of paraffin-like species, i.e., long-chained hydrocarbons with either straight or branched chains, as shown schematically below. ![](images/lubrication_long_chain_hc.jpg) **Synthetic Oils:** These have fewer impurities than mineral oils, but are significantly more expensive. They are used when relatively high or low temperatures or loads are to be experienced in service, or if low flammability is essential. Examples are synthetic hydrocarbon oils (SHCs) and silicones (below). ![](images/lubrication_silicone.jpg) **Solid Lubricants:** ![Fig](images/lubrication_graphite.jpg) These can be used at higher temperatures. They have a layered structure with weak intermolecular forces between layers, allowing the layers to slide easily relative to each other (low shear strength), thus giving lubricant properties.
To work best, the layers should be oriented parallel to the surface and in the direction of motion, so that on movement the layers can slide over each other easily. In the adjacent diagram the crystal structures of two common solid lubricants are shown: (a) graphite and (b) molybdenum disulphide. Solid lubricants can be used to produce ‘self-lubricating' systems which do not need an external source of lubrication during the lifetime of the system. They are also particularly useful in vacuum technology and space applications, because they do not evaporate away.

Viscosity - The most important property of an oil for lubricating purposes is its viscosity. Viscosity provides a measure of the resistance of a fluid to shearing flow.

Lubrication - additives = Additives to oils: Additives either prolong the life of a lubricant or increase its viscosity index (VI), preventing the oil from becoming too thin at high temperatures or too viscous at low temperatures. Examples include: * Viscosity-index improvers: oil-soluble long-chain polymers which increase the VI by decreasing the viscosity at low temperatures (pour-point depressants – preventing the oil from becoming too viscous at lower temperatures) or increasing the viscosity at high temperatures. * Extreme pressure (EP) additives: react with the sliding surfaces under the service conditions, giving compounds with low shear strength which behave as thin lubricating films, partially separating asperities and preventing them from welding together. They usually contain sulphur or chlorine to facilitate the chemical reactions. An example is zinc dialkyl dithiophosphate (ZDDP). EP additives give boundary lubricating properties. * Boundary lubricants (e.g. stearic acid, C17H35COOH): polar end-groups on the hydrocarbon chain bond to the surfaces, providing layers of lubricant molecules which reduce direct contact between asperities on the surfaces. The lubricant film is very thin, so there is still significant asperity contact, but the asperity junctions are weakened compared to the unlubricated case. ![Fig](images/lubrication_boundary.jpg) Other additives are detergents, antioxidants and dispersants. Detergents clean and neutralise oil impurities which would otherwise cause deposits. Antioxidants prevent oils from oxidising. Dispersants prevent contaminants from aggregating into larger groups that hinder the flow of the oil.

Regimes of lubrication ![](images/lubrication_stribeck.jpg) The Stribeck curve: the variation in the coefficient of friction with the dimensionless quantity η U/W for a lubricated bearing. Here, η is the viscosity (dimensions ML-1T-1), U the peripheral speed of the bearing (dimensions LT-1), and W the load per unit width carried by the bearing (dimensions MLT-2/L = MT-2) (after Hutchings and Shipway, 2nd ed., p. 90). A nice commentary on Stribeck's work can be found in B. Jacobson, ‘The Stribeck memorial lecture', *Tribology International* **36**, 781-789 (2003). **Hydrodynamic lubrication** – In this regime the surfaces are separated by a fluid film that is usually thick in comparison with the heights of the asperities. The normal load is supported by the pressure within the film, which is generated hydrodynamically. **Elasto-hydrodynamic / ‘mixed' lubrication** - Here, the separation of the surfaces is very low – for example, the load on a bearing has increased enough to bring it into this regime of behaviour because of local line or point contact. In steel components such as gears, local pressures can be several GPa.
Elastic deformation of the bearing surfaces occurs and the oil film behaves almost like a solid – to a good approximation the viscosity of the oil, η, increases exponentially with the pressure, *P*. As η U/W falls, the lubricant film begins to break down and a sharp rise in friction ensues because of direct asperity interaction. This is therefore also referred to as the regime of ‘mixed' or ‘partial' lubrication. **Boundary lubrication** - This occurs at very low sliding speeds and/or high contact pressures. Steric forces between polar molecules, either present naturally (e.g., in castor oil) or deliberately added to the oil (such as stearic acid or EP additives), prevent or limit contact between asperities of the two surfaces. Under the most favourable circumstances μ can be very low (e.g., ≈ 0.001), so lubrication is certainly beneficial in reducing the wear of materials.

Wear - introduction = Wear is the deformation and removal of material from its original position on a surface as a result of the mechanical action of another surface and/or particles. In general, material is removed from a softer surface by a harder surface. There are two main categories of wear process: sliding wear and wear by hard particles. The distinction between the two is not sharp, and there will almost always be a degree of both occurring. Sliding wear occurs when two solid bodies slide over each other. One or both of the surfaces will suffer wear. An example is tyres in contact with a road. Lubrication can dramatically reduce wear rates. Strong interfacial bonds form across asperity junctions. When two dissimilar metals slide against each other, the asperity junctions formed are stronger than the weaker of the two metals. This leads to the plucking out of fragments of the softer metal, giving rise to severe wear of the softer metal. Wear by hard particles can be roughly broken down into abrasion and erosion. In three-body abrasive wear, material is removed or displaced from a surface by hard particles rolling between two surfaces ((b) below). In two-body abrasion, wear is caused by hard protuberances on one of the surfaces ((a) below). In erosion, wear is caused by hard particles striking the surface, either carried by a gas stream or entrained in a flowing liquid. More hard particles may be generated by this process, or by sliding wear, which can result in increasing rates of wear. Abrasion and erosion can be useful in some circumstances, for example grinding and polishing samples for metallographic examination. ![Fig](images/wear_types.jpg) Diagram illustrating abrasion and erosion (I.M. Hutchings, *Tribology: Friction and Wear of Engineering Materials*, Edward Arnold, London, 1992, p. 133).

Sliding wear - The Archard equation, derived from a simple model, can be used to gauge the severity of sliding wear. The Archard equation is \(Q = \frac{KW}{H}\) where Q is the total volume of wear debris produced per unit distance moved, H is the indentation hardness, W is the total normal load and K is a dimensionless constant of proportionality. From this equation it is apparent that wear increases linearly with the contact load, that K is a measure of the severity of wear, and that hard materials wear less than soft materials. There is little correlation between K and μ. Furthermore, the simple model does not tell us anything about the mechanism of material removal. Sliding wear – extent of wear: * Increasing the load leads directly to higher stresses, which results in greater wear.
* Sliding velocity determines the relative rate of heat conduction away from the surface. At low sliding velocity, the heat generated (due to friction) is conducted away relatively rapidly, so the interface temperature stays low (isothermal conditions). At high velocity, only limited heat conduction can occur, so the interface temperature increases and the conditions are adiabatic. * High interface temperatures increase the reactivity of the surfaces, causing rapid growth of oxide films. They also reduce the mechanical strength of asperities and may even cause melting in extreme cases. ![graph of load types](images/wear_graph_load_velocity.jpg)

Sliding wear – mechanisms - Wear is a complex process involving a number of different mechanisms. The dominant mechanism depends on the conditions - this is shown on the map below. ![Fig](images/wear_regimes.jpg) Wear regime map for the sliding of steel on steel (from S.C. Lim and M.F. Ashby, ‘Overview no. 55: Wear-mechanism maps', *Acta Metall.* **35**, 1-24 (1987)). This is similar for most metals. Eight distinct regimes are identified in this map: Regime I: Very high contact pressure. Gross seizure of the surfaces: catastrophic growth of the asperity junctions occurs, leading to the real area of contact becoming equal to the apparent area. Regime II: High loads and relatively low sliding velocity. Penetration of the thin native surface oxide film occurs, leading to high wear rates and metallic debris. Thermal effects are negligible as the sliding velocity is low. Regime III: Lower loads than regime II, resulting in the oxide not being penetrated. Wear is mild because only oxide debris is formed. Regime IV: High loads and sliding speeds. Melting occurs as frictional power dissipation is high and thermal conduction is ineffective at removing heat from the interface. The wear rate is high, with metal being removed as metallic droplets. Regime V: Low contact pressure but high sliding speed. The interface temperature is still high but below the melting point, so surface oxidation occurs rapidly. Wear is mild because the debris is oxide. Regime VI: Hot-spots at asperity contacts occur, causing local oxide growth. Wear debris comes from this oxide layer spalling. Regime VII: Metallic contact occurs at asperities (despite the ability of oxide to grow), leading to severe wear through the formation of metallic debris. Regime VIII: Martensite forms at the interface through local heating of asperities followed by quenching through heat conduction into the bulk. This provides local mechanical support of the oxide film, because martensite has a high strength, helping to reduce the degree of wear. Wear occurs by the formation of oxide debris. Boundaries on this map are not sharp – they are broad and there is overlap between the regimes.

Wear by hard particles - abrasion and erosion = Factors affecting the rate of wear: - **Hardness** - particles with hardness lower than the surface cause little wear. **Shape** - angular particles cause greater wear than rounded particles. **Size** - larger particles cause more extensive wear as they carry more kinetic energy. **Impact speed (for erosion)** - faster particles cause more extensive wear as they carry more kinetic energy; a numerical sketch of this scaling is given below. **Impact angle (for erosion)** - particles hitting at angles close to perpendicular to the surface cause greater erosion.
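The kinetic-energy scaling behind the speed and hardness factors can be made concrete with the simple indentation-type erosion model quantified under 'Erosive Wear' below, V = K m U² / 2H. All numbers in this sketch are assumed for illustration only.

```python
# Sketch of the simple erosion model V = K * m * U**2 / (2 * H): volume
# removed scales with particle kinetic energy and inversely with hardness.

def eroded_volume(K, m, U, H):
    """Volume (m^3) removed by total particle mass m (kg) striking at
    speed U (m/s) a surface of indentation hardness H (Pa)."""
    return K * m * U**2 / (2 * H)

K = 1e-2       # dimensionless wear coefficient (assumed)
m = 1e-3       # 1 g of erosive particles (assumed)
H_soft = 1e9   # softer surface, Pa (assumed)
H_hard = 1e10  # 10x harder surface, Pa (assumed)

for U in (10.0, 50.0, 100.0):
    print(f"U = {U:5.1f} m/s: V(soft) = {eroded_volume(K, m, U, H_soft):.2e} m^3, "
          f"V(hard) = {eroded_volume(K, m, U, H_hard):.2e} m^3")
# Doubling the speed quadruples the volume removed; a surface ten times
# harder loses ten times less material.
```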
Abrasive wear: The particles are often larger than the lubricant film thickness, so contact between the particles and the surface occurs, meaning lubrication does not reduce abrasive wear. Abrasive wear can arise either from plastic deformation forming a groove in a material or from brittle fracture. In brittle fracture, lateral cracks formed beneath a plastic groove produce chips which are subsequently removed from the surface. ![Abrasive wear](images/wear_ductile_vs_brittle.svg) Schematics of abrasive wear of (a) a ductile material and (b) a brittle material. Materials with high hardness tend to have low toughness (brittle) and vice versa (ductile), so maximum wear resistance arises through a combination of intermediate values of hardness and toughness. ![Wear toughness](images/wear_toughness.png) Metals (tougher but less hard) suffer abrasive wear by plastic deformation; ceramics (less tough, but harder) by brittle fracture. Brittle fracture can be modelled through analogy with the indentation of brittle materials. If the variables assumed are W (load), H (hardness) and Kc (fracture toughness): \[Q = A W^p H^q K_{\rm c}^{-r}\] with Q being the volume wear rate per unit sliding distance, A a constant, and W, H and Kc raised to powers of p, q and -r respectively. Models used for predicting wear rates where brittle fracture is involved **predict wear rates higher than would be expected from plastic mechanisms.** These models also predict: * an increase in wear rate with the size of the abrading particles * an inverse correlation between fracture toughness (raised to some power) and wear rate, to the extent that **fracture toughness is a more important material parameter than hardness** * a threshold load below which wear by brittle fracture will not occur

Erosive Wear: - Material removal from each impact is very small, but the collective damage can be significant. Variables which a simple model would expect to affect the volume of material, V, removed from an eroding surface of a plastically deforming material are the velocity, U, of the particles and their mass, m (together in a kinetic energy term), and the hardness, H, of the material being eroded. Thus, from dimensional analysis, for this simple model, in which indentation-type behaviour is envisaged, we expect \[V = \frac{KmU^2}{2H}\] where K is a dimensionless constant. If we define erosion, E, as \[E = \frac{\rm mass\ of\ material\ removed}{\rm mass\ of\ erosive\ particles\ striking\ the\ surface}\] it follows that \[E = \frac{V\rho}{m} = \frac{K\rho U^2}{2H}\] where ρ is the density of the bulk material. This shows the importance of a high hardness of the surface being eroded for wear resistance. A model for erosive wear by brittle fracture would be similar, but would show a high fracture toughness being more crucial for wear resistance. ![](images/wear_hard_rock.jpg) An example of erosion, with sand particles wearing away the rock. A second example is the Chiltern escarpment in Buckinghamshire in the south of England, a boundary between the hard chalk of the Chiltern Hills and the soft clay of the Vale of Aylesbury. Over geological time periods, the clay has worn away faster than the chalk, so that there is a noticeable slope (escarpment) between the valley and its neighbouring chalk hills.

Summary = Friction and wear are key concepts. They are experienced in everyday life and can be either detrimental or useful.
For example, friction is unwanted when pushing or dragging a heavy object along a floor. Wear can result in components failing and no longer being fit for purpose, e.g., drill bits. However, high frictional forces are desirable for car brakes, and grinding away the surfaces of metallographic specimens is easiest when there are reasonably high wear rates.

Questions = ### Quick questions*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*1. Which of the following correctly describe lubricants? (You may pick more than one answer) | | | | | - | - | - | | | a | Decrease μ | | | b | Decrease the extent of wear | | | c | Are always liquids | | | d | Always completely separate the surfaces / prevent asperity contact | 2. μ depends on… (You may pick more than one answer) | | | | | - | - | - | | | a | Apparent area of contact | | | b | Normal load | | | c | Sliding velocity | | | d | All of the above | 3. The Archard equation is useful because it provides a measure of… (You may pick more than one answer) | | | | | - | - | - | | | a | The severity of wear | | | b | Viscosity index | | | c | Surface roughness | | | d | The shear strength of a lubricant | 4. When skis move over snow, sliding takes place over a thin film of water on top of the snow, giving a low μ. This is because | | | | | - | - | - | | | a | a pressure-induced solid → liquid phase transformation occurs | | | b | snow melts because of the dissipation of heat due to frictional work | 5. The coefficient of friction between two given materials is constant. | | | | | - | - | - | | | a | True | | | b | False |### Deeper questions*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*6. Geckos are able to climb vertical walls. Can they do this using only frictional forces?

Going further = ### Books * *Tribology: Friction and Wear of Engineering Materials*, 2nd edition, by I. Hutchings and P. Shipway. Provides much more in-depth discussion of all of the topics discussed in this TLP. * *Fracture of Brittle Solids* by Brian Lawn. Contains useful diagrams and a good explanation of brittle fracture. * *Introduction to Surface Engineering* by P.A. Dearnley. A good introduction to the way in which surfaces can be engineered to take account of friction and wear. ### Websites * The website of the Society of Tribology and Lubrication Engineers, and its associated links. * A Dutch-based website on tribology authored by Prof Anton van Beek of Delft University of Technology.
Aims On completion of this TLP you should: * Understand the hierarchical structure of wood. * Be aware of the differences between hardwoods and softwoods. * Understand how to calculate the stiffness and strength of wood. * Be aware of water's effect on wood. * Be aware of why wood has certain mechanical properties, and how these can be used for a variety of applications.

Before you start * You should understand the basic mechanics of composite materials. * You should understand the derivation for the deflection of a beam under symmetrical 3-point bend testing. * You should understand the derivation for the ultimate tensile stress or strength of a material measured under symmetrical 3-point bend testing.

Introduction Wood is the oldest and one of the most commonly used engineering materials in the world. The earliest evidence for a domestic structure in Britain is that of a tent-like structure made with wooden supports dating to 7000 BC, and wood is the most commonly used building material to this day. Worldwide, 10⁹ tonnes of wood are used per annum, comparable to the consumption of iron and steel. Wood is so widely used because of its low cost (per tonne, roughly 1/60 that of steel) and high specific strength (a high value of strength divided by density). Wood also combines high stiffness and high toughness. As wood is a renewable resource, it is a good material from an environmental perspective and its production requires only a low energy input. One mature tree supplies enough O2 gas for 10 people, but in the UK we are consuming 12 trees per person per year. We must therefore think carefully about the environmental impact of using large quantities of wood. Wood is a fibre-composite material (*cellulose* fibres in a *lignin* matrix) with a complex overall structure. Wood is a cellular material. Cells form the basic unit of life and are immensely complicated. There are roughly 10¹² cells of 4 main types in a tree. Cells display a great deal of self-organisation and assembly. Additionally, the constituents of a tree undergo continuous renewal, making a tree a dynamic system. In this TLP you will learn how the structure of the tree trunk is specially adapted for its functions: to support the leaf canopy, to transport mineral solutions via conduction, and to store food in the form of carbohydrates. There are two types of wood: *softwoods* and *hardwoods*. There is, however, little correlation between the type of wood and its properties: some hardwoods are very soft! This TLP discusses the mechanical properties of wood, and explains wood's generally high strength under tension. Wood also shows high toughness and stiffness. These values vary greatly depending on the type of wood and the direction in which the wood is tested, as wood shows a high degree of anisotropy. Wood's properties are also strongly affected by the amount of water present in the wood. Generally, increasing the water content of wood lowers its strength.

The structure of wood (I) = The basic unit of wood structure is the plant cell, which is the smallest unit of living matter capable of functioning independently. The cell has many functions, such as the manufacture of proteins, polysaccharides and mineral deposits. A plant cell varies in diameter from 10–100 μm.
The main difference between plant and animal cells is that plant cells have a cell wall outside the plasma membrane, which is 0.1 to 100 μm thick. This makes the cells rigid, among other effects prohibiting the locomotion typical of animals. The cell wall supports the cell membrane, as the internal pressure in the cell can be as high as 1 MPa. The plasma membrane acts as a selective barrier, enabling the cell to concentrate the nutrients it has gathered from its environment while retaining the products synthesised within the cell for its own use. It is also able to excrete any waste products from the cell. The membrane is formed from amphipathic molecules, i.e. one end is hydrophilic (water liking) and the other end is hydrophobic (water disliking). The nucleus is the most prominent organelle in cells and contains the genetic information (DNA) necessary for control of cell structure and function. In the cell, the endoplasmic reticulum synthesises proteins and the Golgi apparatus sorts them; the proteins are then stored within the fluid *cytosol*. Chloroplasts contain energy-converting systems that make ATP by capturing and using the energy from sunlight. Mitochondria produce ATP from larger energy-storage molecules, such as glucose. Finally, the vacuoles can store nutrients and waste products, increase the cell size if necessary, and control the internal (turgor) pressure. ![](figures/plant_cell.jpg) A plant cell

An extracellular matrix called the cell wall, which acts as a supportive framework, surrounds the plant cell. It is made of a network of cellulose microfibrils embedded in a matrix of lignin and hemicellulose, which are examples of natural polymers. Cellulose is a polymer of 8,000 to 10,000 monomers of anhydroglucose in the form of a flat 6-membered ring. The individual polymers are aligned in parallel, and cellulose is up to 90% crystalline. Cell secretions form the matrix, and cellulose and lignin comprise the bulk of a tree's biomass. ![](figures/cellulose.png) The structure of cellulose

The tubular cell wall has a layered structure: ![](figures/cell_wall_schematic.png) Cell wall schematic Further cells are aligned parallel to the cell shown. The middle layer is the thickest and most important, and the orientation of the cellulose microfibrils within it is significant. The orientation of the microfibrils has only been shown for this layer. The cell wall is approximately 50% cellulose fibrils. To toughen the structure, the fibrils are aligned at 10 to 30° to the tree trunk axis in the middle layer of the cell wall. The open space in dry wood is approximately 50%, but can be as high as 92% in balsa wood. In green wood (freshly cut timber with over 19% moisture content) the amount of open space is lower, as some of the space is filled with water.

The structure of wood (II) Wood is extremely *anisotropic* because 90 to 95% of all the cells are elongated and vertical (i.e. aligned parallel to the tree trunk). The remaining 5 to 10% of cells are arranged in radial directions, with no cells at all aligned tangentially. The diagram below shows a cut-through of a tree trunk: ![](figures/tree_trunk_cut_through_sml.png) A cut-through of a tree trunk In the trunk there are three main sections: the *heartwood*, which is physiologically inactive; the *sapwood*, where all conduction and storage occurs; and the *bark*, which protects the interior of the tree trunk.
The two main types of tree, *softwoods* and *hardwoods*, have distinct internal structures. *Coniferous* trees are softwoods, with vertical cells, tracheids, 2 to 4 mm long and roughly 30 μm wide. These cells are used for support and conduction; they have an open channel and a thin cell wall: ![](figures/tracheid_cell.png) Cross-section of a tracheid cell, typical of a softwood The storage cells, parenchyma, are found in the radial direction. Scots pine is an example of a softwood tree. Below is shown a 3D model of the trunk interior of Scots pine, made from micrographs of sections cut in the tangential, radial and transverse planes: *Broad-leaved* trees are called hardwoods. The vertical cells in hardwoods are mainly fibres, which are 1 to 2 mm long and 15 μm wide. These are thick-walled with a very narrow central channel and are for support only. ![](figures/fibre_cell.png) Cross-section of a fibre cell, found in hardwoods These cells are unsuitable for conduction, and so the tree needs vessels for this purpose. Vessels are either xylem, which are dead cells that carry water and minerals, or phloem, which are live cells that transport energy sources made by the plant. Vessels are 0.2 to 1.2 mm long, open-ended and stacked vertically to form tubes of less than 0.5 mm in diameter. Hardwoods also have a small number of tracheid cells, and parenchyma cells are still present radially for storage. Both balsa and greenheart are examples of hardwoods. Below is shown a 3D model of the trunk interior of greenheart, made from slides taken in the tangential, radial and transverse directions:

The structure of wood (III) = The structure of the tree trunk has now been discussed at both the cellular and macroscopic scale. At the level of the complete structure, there is a further point of interest: the tree is pre-stressed. The centre of the tree trunk is in compression, and the outer layers are in tension. The stressing is achieved as the inner sapwood shrinks as it dries and becomes heartwood. As the heartwood has a lower moisture content, it is better able to resist compression. ![](figures/comp_ten_trunk_sml.png) Regions of tree trunk in compression and tension Try the interactive tree bending demonstration below. Compare the bending of the pre-stressed and not pre-stressed trees in a strong wind, paying attention to the graphs showing the areas of the tree trunk in tension and compression. When a tree grows, the new cells grow at the edge of the tree from the vascular cambium. At the beginning of the growing season, in spring, the cells that grow are large due to the greater amount of moisture available. Throughout summer, the moisture available decreases and the cells also decrease in size as a result. By winter cells can no longer grow, and cells at the edge of the sapwood region near the central heartwood dry out and die. This sequence is evident as annual growth rings, and the process is used to date trees by *dendrochronology*. In a good growing year, the growth ring will be wider than in a bad growing year. By working out the sequence of good and bad years it is possible to match this sequence to the tree, as long as it is more than fifty years old when felled, and hence find the age of the tree. Close examination of the last growth ring then pinpoints the actual season in which the tree was cut down. This technique was used to date the oldest-known timber track-way in the world, the Sweet Track in the Somerset Levels, to the winter of 3807 to 3806 BC.

Stiffness of wood =
The stiffness of wood can be measured using a simple three-point bend test, as shown below: ![](figures/3_point_bend.png) Three-point bend test set-up The width (*w*) and height (*h*) of the wood samples are measured, and the specimens are placed in a three-point bend testing apparatus with the height of the wood oriented vertically in the apparatus. The distance (*L*) between the two supports is also measured. The deflection of the middle of the beam, as a function of the load on the pan of the apparatus, is measured to calculate the stiffness. As the *elastic* properties of wood are being tested, it is important to ensure that the sample does not become permanently deformed. To achieve this, the mass on the pan is increased stepwise in 100 g increments, ensuring that the deflection remains less than 3 mm, until a total mass of 600 g is reached. No load is added until the deflection caused by the previous load has stabilised, and the equipment is not jogged or tapped, as these actions affect the results recorded. Video of three-point bend test for Scots pine The resulting load (m) – displacement (δ) curves on loading and unloading for (a) balsa, (b) Scots pine and (c) greenheart are shown below: ![](figures/balsa_edited_plot_sml.png) (a) ![](figures/scotspine_edited_plot_sml.png) (b) ![](figures/greenheart_edited_plot_sml.png) (c) Using the equation for the deflection of a material under symmetric three-point bending: \[\delta = \frac{mgL^3}{48EI}\] the Young's modulus for each sample is calculated from: \[E = \frac{gL^3}{48I}\frac{m}{\delta}\] The gradient of each graph gives \(\frac{m}{\delta}\), the other quantities have been measured, and in this case \(I = \frac{wh^3}{12}\), so the Young's modulus can be found. The results for balsa, Scots pine and greenheart are:

| | Greenheart | Balsa | Scots pine |
| - | - | - | - |
| *E* (GPa) | 16.4 ± 0.7 | 6.7 ± 0.3 | 13.5 ± 0.7 |
| Textbook values *E* (GPa) | 21 | 3.2 | 10 |

The textbook values represent a broader range of samples. Clearly the definition of softwood and hardwood has little relation to the wood's material properties: the softwood Scots pine is much stiffer than the hardwood balsa. This is mainly due to the ultra-low density of balsa, as the stiffness (and also strength) of wood correlates with density. The values of Young's modulus show that wood is reasonably stiff. Wood is a composite material, and so to stretch the wood samples the cellulose microfibrils in the wood have to be stretched. The Young's modulus of cellulose fibrils is 100 GPa, and that of lignin and hemicellulose averages to 6 GPa. Under axial loading, an equal-strain condition applies and the Young's modulus of the wood cell wall can be calculated as follows: \[E_{\rm cell\ wall} = (1 - f)E_{\rm cellulose} + fE_{\rm matrix} = 0.5(100) + 0.5(6) = 53\ {\rm GPa}\] where f = 0.5 is the volume fraction of the lignin-hemicellulose matrix. Clearly, the Young's modulus of the cell wall is a lot higher than that of wood as a whole, as the cells and spaces in the wood, filled by air or water, also affect wood's Young's modulus, decreasing its value.
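The sketch below reproduces the two calculations just described: Young's modulus from the gradient m/δ of a bend test, and the equal-strain (rule-of-mixtures) estimate for the cell wall. The sample dimensions and gradient are assumed illustrative values of the same order as the Scots pine results above.

```python
# Young's modulus from a three-point bend test: E = (g L^3 / 48 I) * (m / delta),
# with second moment of area I = w h^3 / 12 for a rectangular beam.

g = 9.81          # m/s^2

# Assumed bend-test values for a small softwood beam:
L = 0.090         # span between supports, m
w = 3.2e-3        # beam width, m
h = 3.3e-3        # beam height, m
gradient = 870.0  # m/delta: gradient of the load-deflection plot, kg per metre

I = w * h**3 / 12
E_bend = g * L**3 / (48 * I) * gradient
print(f"E from bend test ~ {E_bend / 1e9:.1f} GPa")   # ~13.5 GPa, Scots-pine-like

# Equal-strain (rule of mixtures) estimate for the cell wall, values from the text:
f = 0.5                # volume fraction of lignin-hemicellulose matrix
E_cellulose = 100e9    # Pa
E_matrix = 6e9         # Pa
E_wall = (1 - f) * E_cellulose + f * E_matrix
print(f"E of cell wall ~ {E_wall / 1e9:.0f} GPa")     # 53 GPa, as in the text
```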
However, the wood cell wall has measured values of Young's modulus of 10 to 60 GPa, and so the composite model of the cell wall provides an accurate mechanical description of its behaviour. The loading and unloading curves do not exactly coincide. This demonstrates that wood shows viscoelastic properties under deformation. Viscoelasticity is advantageous, not least because it dampens vibrations: in high winds, damping of resonance protects the branches and trunk from the excessive deflections associated with damage. A stiff material could also limit deflections, but at the expense of high stresses. Overall, it is preferable to be able to bend. The origins of wood's viscoelastic behaviour lie in the lignin matrix. Lignin is an amorphous polymer, and its elastic regions respond instantly to the strain while the viscous regions respond more slowly. Due to this viscoelasticity, energy is dissipated in the wood on loading. On the graphs, the area between the loading and unloading curves shows the energy dissipated in the wood. However, the amount of energy is not high enough to cause problems. In living trees, in particular, the high water content of the wood inside the cells and extracellular matrix prevents a significant temperature rise, because of the high heat capacity of water.

Strength of wood The strength of wood can also be measured using a three-point bend test. The width (*w*) and height (*h*) of the wood samples are measured, and the specimens are placed in the three-point bend testing apparatus with the height of the wood orientated vertically in the apparatus. The distance (*L*) between the two supports is also measured. The wood samples are again loaded in 100 g increments. If the micrometer needle continues to move after a 100 g load has been added to the pan, the reading is allowed to stabilise before further mass is added. The mass on the pan is increased in this way until the sample fails. At this point the load and deflection of the sample before failure are noted. Video of three-point bend test of greenheart By following this method and repeating for three samples each of balsa, Scots pine and greenheart, the following results were obtained:

| | Greenheart | | | Scots pine | | | Balsa | | |
| - | - | - | - | - | - | - | - | - | - |
| Sample | 1 | 2 | 3 | 1 | 2 | 3 | 1 | 2 | 3 |
| w (mm) | 3.0 | 3.4 | 3.0 | 3.4 | 3.1 | 3.6 | 3.8 | 3.7 | 3.7 |
| h (mm) | 3.1 | 3.6 | 3.4 | 3.6 | 3.4 | 3.4 | 3.7 | 3.6 | 3.6 |
| L (mm) | 90 (all samples) | | | | | | | | |
| Maximum mass (kg) | 4.7 | 7.9 | 5.8 | 2.9 | 3.3 | 3.0 | 2.5 | 0.8 | 1.0 |
| Maximum deflection (mm) | 6.05 | 5.75 | 6.80 | 3.59 | 6.06 | 4.60 | 2.50 | 4.10 | 4.00 |
| Strength (MPa) | 215.9 | 237.4 | 221.5 | 87.2 | 120.4 | 95.5 | 63.6 | 22.1 | 27.6 |
| Average strength (MPa) | 225 | | | 101 | | | 38 | | |

The error in the individual strength results can be calculated using the standard deviation of the strength measurements. The error in the average value of strength is then found by dividing this error by \(\sqrt{N}\), where *N* is the number of strength measurements taken.
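As a sketch of this error calculation, the snippet below takes the three greenheart strengths from the table and computes the mean, the standard deviation, and the standard error of the mean (the standard deviation divided by √N).

```python
# Mean, standard deviation and standard error for the greenheart strengths.
import math
import statistics

strengths = [215.9, 237.4, 221.5]   # MPa, three greenheart samples (table above)

mean = statistics.mean(strengths)
std = statistics.stdev(strengths)            # sample standard deviation
std_err = std / math.sqrt(len(strengths))    # error in the average strength

print(f"mean strength = {mean:.0f} MPa")     # ~225 MPa
print(f"std deviation = {std:.1f} MPa")      # ~11.2 MPa, matching the table below
print(f"error in mean = {std_err:.1f} MPa")  # ~6.5 MPa
```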
The errors in the measurements are:

| | Greenheart | Scots pine | Balsa |
| - | - | - | - |
| Standard deviation (MPa) | 11.2 | 17.3 | 22.6 |
| Error in average strength (MPa) | 6.5 | 10.0 | 13.0 |

To calculate the strength, the following equation is used: \[{\rm Strength} = \frac{3mgL}{2wh^2}\] The results for balsa, Scots pine and greenheart are as follows:

| | Greenheart | Scots pine | Balsa |
| - | - | - | - |
| Strength (MPa) | 225 ± 7 | 101 ± 10 | 38 ± 13 |
| Textbook values (MPa) | 181 | 90 | 23 |

The textbook values reflect a wider range of samples. The large differences between our experimental results and the textbook values can also reflect errors in the weights, the distance between the supports, and *w* and *h*. Three-point bending is not a very accurate method for strength testing, as the force is concentrated at one point in the material. The high error also shows the natural variability of wood within a species, and even within a tree. Wood performs well under uniaxial tension, due to the high strength of the cellulose microfibrils. Wood is a lot weaker in compression, as the cells can collapse. Buckling of the cell walls occurs first in the vertical cells at the points where they are deflected by the rays (radial cells). This leads to creases, which can act as cracks in the wood when tension is applied. For this reason, diving boards break if turned over: the unseen crease that was in compression on the underside of the board is put into tension when the board is turned over, causing it to fail. On bending of wood, gradual crushing occurs on the compression side of the beam, transferring load to the tension side (the lower side in our three-point bend tests). Trees have evolved to avoid this problem by being in a pre-stressed state. As the outer layers of the tree trunk are normally in tension, on bending the compressive side of the trunk can avoid going into an absolute state of compression. ![](figures/comp_failure.png) Compressive failure in wood The wood samples fail by crack propagation across the lower surfaces of the samples, which are under tension. A simple way of explaining the high failure stress (strength) of wood is to say that a fibre pull-out mechanism occurs on failure. As it is the fibres that must be broken for the sample to fail, the strength of the sample depends mostly on the strength of the fibres within the wood. Cellulose fibres are quite strong, so wood also has reasonable strength, and very high specific strength due to its low density. However, as we will see later, the fibre pull-out mechanism cannot completely explain the high strength of wood.

Water's effect on the mechanical behaviour of wood The mass of water in a freshly felled tree is 60 to 200% of the dry mass of the tree. In dried-out timber there is only roughly 10 weight percent water content. However, timbers tend to achieve equilibrium with the surrounding air, settling to a moisture content of 22 to 23% in moist, water-saturated air. The effect of water on wood must therefore be considered. Combining and repeating the previous two experiments with the three-point bending equipment can help to demonstrate the effect. Some wood samples are soaked in water for 24 hours. This should ensure that they have a similar level of water content to green (newly felled) timber. The deflections of the wood samples are noted as the mass on the pan is increased in 100 g increments up to 600 g, in order to calculate the Young's modulus of the wood.
The mass is then increased further until the failure load is reached. At this point the failure load and maximum displacement of the beam centre are noted. This allows a measurement of the strength of the wet wood samples. Video clip showing a three-point bend test to measure the deflection and failure load of a wet balsa sample The stiffness and strength of the wet samples are worked out using the methods shown previously. By following this method and repeating for three samples each of balsa, Scots pine and greenheart, the following results were obtained:

| | Greenheart | Scots pine | Balsa |
| - | - | - | - |
| Stiffness (GPa) | 16.1 ± 1.7 | 6.0 ± 0.7 | 2.2 ± 0.7 |
| Strength (MPa) | 112 ± 3 | 47 ± 4 | 11 ± 3 |

Evidently, increasing the water content of wood by soaking the samples in this way lowers both the stiffness and the strength of the wood. When dry timber has its water content increased to the levels found in green timber, the cell walls fill with water. This causes the cell walls to expand and a dimensional change occurs. Water's presence dramatically softens the cell walls. The hydrogen bonds between different polymer chains in the crystalline cellulose microfibrils can break. Hydrogen bonds form with water instead, as it is a small, polar molecule and so can get in between the polymer chains. Stronger hydrogen bonds are formed between cellulose and water than between cellulose and cellulose, making hydrogen bonding with water more favourable. This softens the cellulose microfibrils, as they are no longer so strongly bonded to each other, making it easier to untangle and hence stretch the fibres. This leads to a decrease in the stiffness of wood. As water expands the cell wall, there are also fewer cellulose microfibrils per unit area. Hence the strength of the wood decreases as, for a given applied stress, the load per fibre is greater. This makes the fibres more likely to break, leading to a crack in the wood sample and causing earlier sample failure. The graph below shows how the compressive strength of a sample of timber changes as the water content increases. Under compression, there is a very marked weakening effect, as water reduces the bonding between fibres, making the cell walls easier to buckle. ![](figures/stess_moisture_plot.png) Longitudinal compressive strength of timber as a function of its moisture content [1].

Wood as an engineering material = Wood has many advantages as an engineering material. For example, its high toughness is due to the cellulose microfibrils present in a matrix of lignin and hemicellulose. As wood is a fibre composite, its toughness can be analysed in terms of a fibre pull-out mechanism of failure. For a typical commercial wood, a fibre pull-out mechanism of failure would predict a value of *Gc* (toughness) of 1.5 kJ m-2, whereas in fact the measured value is 15 kJ m-2. The extra toughening is due to the helical winding of the cellulose microfibrils in the cell wall, offset at 10 to 30° to the trunk axis. Because of this offset, the axial modulus of the wood is decreased, but there is a great increase in toughness. On failure, the middle layer of the cell wall cracks first, parallel to the fibrils. This leads to a decrease in the diameter of the layer, causing it to separate from the outer layer of the cell wall and fold inwards. An enormous absorption of energy results, leading to wood's high toughness.
On bending, splitting also occurs parallel to the grain and ahead of the crack, blunting the crack. High toughness is therefore imparted to the wood, as this reduces the force concentration at the tip of the crack. The progression of the crack may be stopped, or at least slowed down, increasing the amount of work needed to reach breaking point. Other advantages of using wood as an engineering material include: * the low energy content needed for production, * the low cost of production, * wood is an environmentally friendly material, * wood is a renewable material. When trees grown in sustainable forests are cut down, more trees are planted, keeping the trees from extinction and maintaining the levels of oxygen production by living trees. * wood has a very high specific strength due to its low density and reasonable strength, * wood's low density also makes it easier to transport, * there are very low costs associated with the disposal of wood, * wood is not electrically conductive, * most woods are non-toxic, * wood is low in thermal conductivity, * nails and screws do not measurably weaken wood, if put in with care, showing that wood is very resistant to stress concentrations. However, wood also has disadvantages as an engineering material, which generally prevent its use as a high-tech material. These include: * there is large variability in properties between species and, depending on growing conditions and the position of the wood within a trunk, within a species. * wood is dimensionally unstable, as water changes its dimensions. * wood's strength decreases when wet. * time-dependent deformation such as creep and viscoelasticity occurs in wood. Creep of wood makes it important that longbows and violins are not left tightly strung. Creep occurs due to movement of the non-crystalline (amorphous) sections of the cellulose microfibrils. * wood is highly combustible. * wood is susceptible to termites, woodworm and infestations. * wood can't be used at high temperatures. * wood is susceptible to rot and disease. * wood is highly anisotropic, although this can be limited by the use of plywood. Plywood involves assembling layers of wood with orthogonal grain orientations, decreasing the anisotropy. Despite these disadvantages, wood is the most commonly used building material in the world. It is used to make houses, furniture, cricket bats and longbows, and was in the past used for wheel rims and hubs, among much else. ![](figures/longbow.png) Longbow In longbows, yew wood is commonly used. Yew was used to make bows as long ago as 3500 BC, from which time a bow was found in the Somerset Levels. Such bows could shoot an arrow over a hundred metres. Medieval longbows, such as those used by the English against the French at the Battle of Agincourt in 1415, could shoot effectively as far as 220 m. The longbow was used by the English in battle for roughly 400 years, being treated as a military weapon until 1662 AD. The bow was also a successful hunting weapon. Yew wood is hard, dense and finely grained. A region of the tree bordering both the sapwood and heartwood regions is used to make the bows. The sapwood can withstand the tension produced on drawing the arrow and so acts as a backing to the bow. On the other hand, the heartwood will endure the compression occurring on the inner edge of the bow. ![](figures/roof_trusses.png) Roof trusses Wood is also generally used to make the trusses on which house roofs are built; wooden trusses are still used in most houses, as other alternatives, such as steel, are too expensive.
For this application the primary consideration is cost: the wood needs to be cheap, as reasonably large quantities are used. It is also useful to choose a wood that will not easily succumb to rot, disease or infestation. The wood must be strong in order to carry the weight of the roof and allow trusses that span greater distances. However, it must also be light for easy transport and manufacture of the roof, and so that no unnecessary weight is placed on the walls of the house. Spruce and pine woods are often used as they are easy and quick to grow, and hence cheap. They can adapt to a wide variety of growth conditions and are widespread in North America and Europe, making them widely available.

Summary =

In this TLP the structure of wood has been studied. You will have learnt that hardwoods contain vessels and fibres whereas softwoods do not. All trees are pre-stressed, and how and why this occurs has been discussed. You should be aware of how the strength and stiffness of different types of wood can be calculated. You will have seen that different woods have different properties, but that it is possible to understand their general behaviour using composite material models. Wood shows viscoelasticity and has different properties when wet. You should now be aware of why these properties occur and how they affect the material. You should also understand why wood is commonly used as an engineering material, and the disadvantages of its use for particular applications.

Questions =

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. What is the main difference between hardwoods and softwoods?

| | | |
| - | - | - |
| | a | Hardwoods are harder than softwoods. |
| | b | Softwoods have areas of tension and compression in their tree trunks. |
| | c | Hardwoods have vessels and fibre cells. |
| | d | Softwoods contain cellulose, lignin and hemicellulose. |

2. How do wood's material properties change when wet?

| | | |
| - | - | - |
| | a | Wood gets stiffer. |
| | b | Wood gets weaker. |
| | c | Wood gets stronger. |
| | d | Wood shrinks. |

3. What deformation characteristic does wood show on three-point bend testing?

| | | |
| - | - | - |
| | a | Fibre pull-out. |
| | b | Hookean elasticity. |
| | c | Crack propagation. |
| | d | Viscoelasticity. |

4. By what mechanism does wood fail on loading during three-point bend testing?

| | | |
| - | - | - |
| | a | Fibre pull-out. |
| | b | Brittle fracture. |
| | c | Ductile fracture. |
| | d | Elastic yielding. |

5. Which of the following is **not** present in a plant cell?

| | | |
| - | - | - |
| | a | Vacuole. |
| | b | Hemicellulose. |
| | c | Chloroplast. |
| | d | Mitochondrion. |

6. Which region of the yew tree is used to make longbows?

| | | |
| - | - | - |
| | a | The cambium. |
| | b | The region bordering both the bark and the cambium. |
| | c | The region bordering both the sapwood and the heartwood. |
| | d | The region bordering both the cambium and the sapwood. |

### Deeper questions

*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

7. Below is shown a 3D model of the trunk interior of balsa made from slides taken in the tangential, radial and transverse directions. Identify the key features on this 3D balsa wood sample:
8. Calculate the percentage of elastic strain energy that is stored in a loading-unloading cycle for a wood sample. ![](figures/scotspine_question.png)

9. Calculate the stiffness of this wood sample given the previous graph of loading and unloading for Scots pine.

10. (…continued from previous question) Now calculate the strength of the wood sample using the following results:

| | |
| - | - |
| *L* (cm) | 9.0 ± 0.1 |
| *w* (mm) | 3.20 |
| *h* (mm) | 3.30 |
| mass (kg) | 1.7 ± 0.05 |

Use *g* = 9.81 m s-2

11. (…continued from previous question) What are the errors in your calculated values of stiffness and strength?

12. What wood, out of the following, would be used to make a cricket bat blade?

| | | |
| - | - | - |
| | a | Willow: tough, light and resilient. |
| | b | Cane: light and springy. |
| | c | Birch: inexpensive, tough and heavy. |
| | d | Elm: very easy to bend, good durability and easy to work. |

Going further =

**Books:**

* Steven Vogel, *Comparative Biomechanics*, Princeton University Press, 2003.
* J.E. Gordon, *Structures: Or Why Things Don't Fall Down*, Penguin Books, 1978.
* J.E. Gordon, *The New Science of Strong Materials: Or Why You Don't Fall Through the Floor*, Penguin Books, 1968.

**Papers:**

[1] J.M. Dinwoodie, *Timber – A Review of the Structure-Mechanical Property Relationship*, Journal of Microscopy, vol. 104, pt. 1, May 1975, pp. 3-32.

**Websites:**

* Website describing the explosive properties of greenheart.
* Website describing a brief history of balsa and its use.
* Website describing the history of the Mosquito wooden aeroplane.
* Use of plywood in plane construction.
* Short history of archery.
* Brief explanation of dendrochronology and its uses.
* How cricket bats are made.
Aims

After this TLP you should be able to:

* Understand how new dislocations are generated from a Frank-Read source
* Calculate the critical shear stress required to operate a Frank-Read source
* Describe the interactions between two dislocations
* Explain the origin of solid-solution strengthening in terms of the interactions between the strain fields of dislocations and solute atoms
* Describe the formation of a Lomer lock
* Understand how jogs and kinks are formed, and their significance in work hardening
* Understand the significance of Frank-Read sources, Lomer locks, jogs and kinks in forest hardening
* Describe stages I and II in the deformation of a single crystal
* Explain the process of grain boundary hardening in a poly-crystal

Before you start

This TLP assumes that you have some basic knowledge of dislocation theory. However, the following TLPs could be helpful: (Basic information about dislocations) (Information about slip in an fcc metal, including stages I and II of its deformation)

Introduction

This TLP concerns the topic of crystal plasticity. Plasticity is defined as: ***the deformation of a (solid) material undergoing non-reversible changes of shape in response to applied forces***.

![True stress-strain curve for a metal](images/intro.svg)

Figure 1: The true stress-strain curve of a crystal.

During plastic deformation, the total strain of a metal is the sum of the elastic and plastic strain. However, the elastic strain is typically less than 1%, which is much less than the plastic strain. This can be seen from the nearly vertical elastic region in the true stress-strain curve in Figure 1. Crystals work harden when they are deformed plastically. Work hardening, also known as strain hardening, describes the increase in the stress level necessary to continue plastic deformation. It arises as mobile dislocations are impeded by jogs and Lomer locks as the crystal is deformed. From Figure 1, it can be seen that the work hardening rate (the gradient of the true stress-strain curve) decreases progressively with increasing strain and eventually approaches a plateau. This is due to the **competing effects between the generation of new dislocations (as more Frank-Read sources are operated), the resistance from jogs, locks and tangles, and the processes which allow dislocations to become organised and to annihilate each other (climb and cross-slip).**

This TLP explores the plasticity of crystals by first introducing some aspects of single crystals. These include the Frank-Read source, dislocation interactions, the formation of Lomer locks and jogs, and the processes of climb and cross-slip. This will be followed by an example of the deformation of a single crystal, where the significance of these aspects in stages I and II is discussed. Finally, it will discuss the deformation of poly-crystals by focusing on grain boundary hardening.

Dislocation generation

The Frank-Read source -

In order to explain the plastic behaviour of a single crystal, a mechanism by which dislocations are generated must be formulated. Such a mechanism is suggested by two experimental observations:

1. Surface displacement at a slip band is due to the movement of about 1000 dislocations over the slip plane. The number of dislocation sources initially present in a metal could not account for the observed slip-band spacing and displacement unless there were some way in which each source could produce large amounts of slip before it became immobilized.
2. If there were no source generating dislocations, cold work should decrease, rather than increase, the density of dislocations in a single crystal.

The mechanism by which dislocations are generated (multiplied) was proposed by Frank and Read in the 1950s and is known as the Frank-Read source. Below is an animation which explains how dislocations are generated from a Frank-Read source.

Animation captions:

1. When a tensile stress is applied to a single crystal, the shear stress exerts a force per unit length \( F = \tau b \) on the dislocation line, which is pinned at both ends. This could occur if the two ends were nodes where the dislocation in the plane of the paper intersects dislocations in other slip planes, or the pinning could be caused by existing precipitates.
2. The shear stress causes the dislocation line to bow outwards, balancing the line tension against the force due to the applied shear stress.
3. The shear stress reaches a maximum when the segment becomes a semicircle.
4. The dislocation segment continues to expand until the ends annihilate each other (as they are of opposite sign), forming a dislocation loop.
5. The loop continues to expand, and a new dislocation line is formed between the pinning points. The process can repeat itself, sending out many loops.

Here is a TEM video showing a real Frank-Read source in action:

The minimum stress required to operate a Frank-Read source

![Frank-Read diagram](images/Frank-Read_diagram.svg)

Figure 2: Force diagram of a Frank-Read source. The dislocation segment is pinned at both ends by forest dislocations.

The force on a dislocation line with a distance d between the pinned ends, when a shear stress τ is applied, is \( F = \tau b d \). This force is balanced by the line tension (energy per unit length) of the dislocation, which is ≈ \( Gb^2 \). At the pinning points, the vertical component of the line tension is \( 2Gb^2\sin \theta \). This reaches a maximum of \( 2Gb^2 \) when the dislocation is bowed into a semicircle (θ = 90°). Hence, the minimum stress required to operate the Frank-Read source is given by \( \tau bd = 2Gb^2 \), i.e.

\[ \tau = \frac{2Gb}{d} \]

Since the distance d between the pinned ends is related to the dislocation density ρ by \( \rho = 1/d^2 \), the minimum stress can be written:

\[ \tau = 2Gb\sqrt \rho \]

The simulation below allows you to explore the effect of changing each parameter in the above equation on the minimum stress. Note that d is changed by changing the (forest) dislocation density (here we model the forest dislocations as pinning sites).
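The same relation is easy to evaluate directly. The Python sketch below sweeps the dislocation density and computes the minimum operating stress; the material values are illustrative assumptions for copper (G ≈ 48 GPa, b ≈ 0.256 nm), not values taken from the simulation.

```python
import math

# Minimum shear stress to operate a Frank-Read source: tau = 2Gb/d,
# with the pinning distance d related to the forest dislocation
# density by rho = 1/d^2, so equivalently tau = 2*G*b*sqrt(rho).
# Values below are illustrative assumptions for copper.

G = 48e9        # shear modulus (Pa)
b = 0.256e-9    # magnitude of the Burgers vector (m)

for rho in (1e10, 1e12, 1e14):      # dislocation density (m^-2)
    d = 1 / math.sqrt(rho)          # mean pinning spacing (m)
    tau = 2 * G * b / d             # minimum operating stress (Pa)
    print(f"rho = {rho:.0e} m^-2 -> d = {d*1e6:.2f} um, "
          f"tau = {tau/1e6:.1f} MPa")
```

Note how a factor of 10⁴ in dislocation density (annealed versus heavily worked material) raises the operating stress by a factor of 10², from a few MPa to a few hundred MPa.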
Dislocation interactions

Dislocation-dislocation interactions -

Around a dislocation, the atoms are displaced from their normal positions. The atomic displacements are equivalent to those caused by elastic strains arising from external stresses. For example, the extra half-plane of an edge dislocation puts the region above the slip plane into hydrostatic compression, whilst the region below the slip plane goes into tension. When dislocations move under a tensile stress, their stress fields interact. As the elastic strain energy is proportional to the square of the local strain, it is energetically favourable for the stress fields to configure themselves so as to minimise this strain. The resultant configuration depends on the signs of the two dislocations interacting with each other. Dislocations of the same sign have the same Burgers vector. Conversely, dislocations of opposite sign have opposite Burgers vectors. (For more information about Burgers vectors, see .)

Below is an animation showing the attraction and repulsion of dislocations on the same and on different slip planes.

Dislocation-solute atom interactions -

Dislocations also interact with the solute atoms in a crystal. The solute atoms can be either interstitial or substitutional. The stress field created by a solute atom is spherically symmetric. The stress field is compressive if the solute atom is larger than the lattice atoms. On the other hand, the stress field is tensile if the solute atom is smaller than the lattice atoms. The spherical symmetry of the stress fields induced by substitutional solute atoms means the fields contain no shear stress component. Hence, they do not interact with screw dislocations, which are pure shear dislocations (i.e. screw dislocations have no hydrostatic tension or compression). However, the stress fields created by substitutional solute atoms will interact with the stress fields of edge dislocations. This will lead to favourable relative arrangements of dislocations and solute atoms, such that the strain energy is minimised.

![Solution strengthening](images/solution_strengthening.jpg)

Figure 3: (Left) Larger solute atoms in the tensile field of an edge dislocation and (right) smaller solute atoms in the compressive field of an edge dislocation. (Source of image: N. Jones, Course E: Mechanical Behaviour of Materials, Part IA (2016), p. 60)

The interaction between dislocations and solute atoms leads to solid-solution strengthening. If an edge dislocation interacts with a solute atom which is larger than the lattice atoms, the solute atom will reside below the extra half-plane of atoms to relieve the hydrostatic tension (Figure 3), allowing the two stress fields to partially cancel. On the other hand, if an edge dislocation interacts with a solute atom which is smaller than the lattice atoms, the solute atom will sit at the end of the extra half-plane to relieve the hydrostatic compression. In both cases, an energetically favourable arrangement of the dislocation and the solute atom is formed, which tends to persist as it minimises the energy of the system. This effect retards dislocation motion: a greater shear stress is required to move the dislocation out of this configuration than was necessary to move it through the host lattice, which gives rise to solid-solution strengthening.

Sessile dislocations

Lomer lock in fcc -

A Lomer lock is a type of sessile dislocation which acts as a pinning point in forest hardening. Below is an animation which explains the formation of a Lomer lock.

Climb and cross-slip

Climb and cross-slip are the two dominant processes by which dislocations become organised and annihilate each other through dislocation interactions.

Climb -

Since the line and Burgers vector of an edge dislocation are perpendicular to each other, there is only one plane in which the dislocation can slip. However, there is an alternative mechanism by which the dislocation can move to a different slip plane, known as climb. Climb is the mechanism by which an edge dislocation moves from one slip plane to another through the incorporation of vacancies or atoms. Climb can be either positive or negative. In positive climb, the dislocation acts as a vacancy sink, absorbing a vacancy to shift itself upwards relative to its initial position. In negative climb, the dislocation acts as a vacancy source: an atom joins the bottom of the extra half-plane (equivalent to emitting a vacancy), which causes the dislocation to shift downwards.
The following animation shows how positive dislocation climb occurs by the diffusion of vacancies around a crystal.

Cross-slip -

Cross-slip is the movement of a screw dislocation from one allowable slip plane to another. Below is a video which explains how cross-slip works. It should be noted that only **perfect screw dislocations** can cross-slip, as their line and Burgers vector are parallel to each other. Dislocations which have edge components can never cross-slip.

Partial dislocations in an fcc system -

Cross-slip becomes more complicated for an fcc metal. In an fcc metal, a perfect dislocation tends to dissociate into two partial dislocations and, therefore, cannot cross-slip while it is dissociated. To understand this, consider the atomic packing on a close-packed (111) plane in Figure 4:

![](images/packing.svg)

Figure 4: Slip in a close-packed (111) plane in an fcc lattice. (Source of image: G.E. Dieter, Mechanical Metallurgy (1988), p. 155)

The {111} planes are stacked in a sequence ABCABC…, and the Burgers vector \( {b\_1} = \frac{a}{2}\left[ {10\overline 1 } \right] \) defines one of the slip directions. However, the same shear displacement can be accomplished by the two-step path b2 + b3. According to Frank's rule, the latter is more energetically favourable, since \( {\left| {{b\_1}} \right|^2} = {a^2}/2 \) is greater than \( {\left| {{b\_2}} \right|^2} + {\left| {{b\_3}} \right|^2} = {a^2}/6 + {a^2}/6 = {a^2}/3 \). Hence, the perfect dislocation decomposes into two partials:

\[ \frac{a}{2}\left[ {10\overline 1 } \right] \to \frac{a}{6}\left[ {2\overline 1 \overline 1 } \right] + \frac{a}{6}\left[ {11\overline 2 } \right] \]

Slip by this two-step process creates a stacking fault ABCAC\( \vdots \)ABC in the stacking sequence. The two partial dislocations, which are separated by the stacking fault, are collectively referred to as an extended dislocation. Since the extended dislocation has both edge and screw components, it defines a specific slip plane, in this case the {111} plane of the fault. Consequently, **the two partial dislocations are constrained to move in this plane and cannot cross-slip unless the partials recombine to form a perfect dislocation again** (this recombination of two partials is referred to as constriction).

It should be noted that while a pair of dissociated partials does need to be forced back together into a single perfect (screw) dislocation in order to be able to cross-slip, this need not happen along the complete length of the dislocation at the same time. What usually happens is that a local constriction (to a short length of perfect (screw) dislocation) is formed; this small section cross-slips onto the new glide plane, where it again separates into two partials (different partials from the original pair). Figure 5 shows an example where it is energetically easier for constriction to take place along a certain length than for the complete length to become a perfect dislocation, and cross-slip, at the same time.

![](images/constriction.svg)

Figure 5: Sequence of events envisaged during the cross-slip process. Four stages in the cross-slip of a dissociated dislocation (a) by the formation of a constricted screw segment (b). The screw has dissociated in the cross-slip plane at (c). (Source of image: Hull and Bacon, 2011)

Below is an animation showing how cross-slip happens in an fcc crystal.

The influence of stacking-fault energy on the availability of cross-slip -

It is important to note that cross-slip is more difficult in metals with a low stacking-fault energy (i.e. a wide stacking fault). This is because the partial dislocations, being well separated, cannot easily recombine to form a perfect dislocation that can cross-slip.
For example, cross-slip is not observed in copper (which has a stacking-fault energy of 45 mJ m-2), but is quite prevalent in aluminium (which has a stacking-fault energy of 166 mJ m-2). Stacking-fault energy is particularly important at relatively low temperatures, since climb is then very difficult and cross-slip is virtually the only mechanism by which dislocations can do anything other than glide on a single slip plane (which is quite a severe limitation for a region trying to undergo a general shape change, which requires independent slip systems).

Dislocation intersections to form jogs and kinks

Introduction -

Commonly, dislocations are generated and move on more than one slip system simultaneously. These dislocations must therefore intersect each other, leading to the formation of jogs. Confusingly, the term 'jog' is used to refer to both jogs and kinks. A jog is a short section, with length and direction equal to ***b*** of the other dislocation, which lies out of the slip plane. A kink is a short break in the dislocation line which lies in the slip plane. The formation of jogs has two important consequences:

1. Jogs increase the length of the dislocation lines, so the intersection of dislocations involves the expenditure of additional energy.
2. Jogged dislocations move less readily through the crystal, so they play an important role in work hardening.

Formation of a jog -

Below is a video showing how a jog is formed when two edge dislocations intersect.

Formation of a kink -

Kinks are formed when the jogs resulting from a dislocation intersection lie in the slip plane instead of normal to it. This can occur when two orthogonal edge dislocations with parallel Burgers vectors intersect each other. As kinks lie in the slip plane, they do not inhibit the movement of the dislocation (i.e. they are glissile). Kinks may also assist dislocation motion, as atoms or vacancies diffusing to them can enable the dislocation to move at stresses below the critical resolved shear stress. In addition, they are often unstable since, during glide, they can line up and annihilate the offset.

![Intersection of 2 edge dislocations](images/intersection_2dislocations.svg)

Figure 6: Intersection of two edge dislocations with parallel Burgers vectors. (Left) Before intersection; (right) after intersection. (Source of image: G.E. Dieter, Mechanical Metallurgy (1988), p. 171)

Contribution to work hardening -

![Intersection of 2 screw dislocations](images/intersection_screw_dislocations.svg)

Figure 7: Intersection of two screw dislocations. (Left) Before intersection; (right) after intersection. (Source of image: G.E. Dieter, Mechanical Metallurgy (1988), p. 172)

From the viewpoint of plastic deformation, the most important type of dislocation intersection is the intersection of two screw dislocations (Figure 7). The intersection of two screw dislocations produces jogs of edge orientation in both screw dislocations (the line vectors of the jogs are perpendicular to the Burgers vectors of the screw dislocations).

![Movement of an edge-oriented jog on a screw dislocation](images/jog_screw.svg)

Figure 8: Movement of an edge-oriented jog on a screw dislocation. The jog is constrained to move along the dislocation in plane AABB. (Source of image: G.E. Dieter, Mechanical Metallurgy (1988), p. 172)

Since an edge dislocation can glide freely only in the plane containing its line and Burgers vector (plane AABB), the only way the jog can move by slip (conservative motion) is along the axis of the screw dislocation.
If the screw dislocation is to slip to a new position, such as MNNO, it can only do so by taking its jog with it through a non-conservative process such as climb. Because dislocation climb is a thermally activated process, the movement of a jogged screw dislocation is temperature-dependent. At temperatures where climb cannot occur, the motion of screw dislocations is impeded by their jogs (i.e. the jogs are sessile) and so the crystal becomes work hardened.

Deformation of a single crystal =

![fcc stress-strain curve](images/fcc_stress_strain.svg)

Figure 9: The stress-strain curve of a single crystal. The gradient of the linear region in stage II is G/200.

When a single crystal is plastically deformed, different behaviours are observed in its stress-strain curve depending on the strain. These behaviours are divided into three stages (Figure 9). Here, only stages I and II will be discussed.

Stage I -

This is a stage of low linear hardening which may be absent, or may account for as much as 40 percent shear strain, depending on the testing conditions (mainly how much lattice rotation is required to start a second slip system deforming, see ). Since only one slip system is operative, the dislocations can glide easily without being impeded by dislocations from other slip systems. Hence, this stage is often referred to as 'easy glide'. In stage I, dislocations of a single slip system, which are parallel to each other, all glide in one direction. Experimental data have shown that there is a small but finite work hardening rate associated with this stage. This is due to the accumulation of dislocation debris in the form of dipoles. The extent of stage I depends on the purity of the crystal. If the impurities form a dispersion of second phases (e.g. silicon and iron phases in aluminium), stage I hardening is reduced or even eliminated. This is because the small inclusions encourage localised slip on planes other than the primary slip plane, so other slip systems are activated. On the other hand, impurities which form a solid solution with the crystal tend to enhance the extent of stage I.

Stage II -

As the crystal is deformed, the tensile axis rotates towards the slip direction. Stage II is initiated when the tensile axis has rotated to a position where two slip systems share the largest Schmid factor. At this stage, two slip systems are active, forming a secondary slip system in addition to the primary system which operates in stage I. The dislocations moving on the two slip systems can interact with each other to form jogs, locks and pile-ups. Consequently, the crystal becomes work hardened in this stage. Stage II work hardening is characterised by an approximately linear stress-strain curve whose slope (the hardening rate) is about G/200 (where G is the shear modulus), with only a mild dependence on temperature and strain rate (i.e. the hardening rate is essentially constant). Using dimensional analysis, it can be shown that the fundamental relationship for the flow stress \( \tau \) in stage II is:

\[ \tau = \alpha \;b\;G\sqrt \rho \]

This is known as the Taylor equation. Here α is a dimensionless number, G is the shear modulus, **b** is the Burgers vector and ρ is the forest dislocation density of the system. The term α represents an average interaction strength between dislocations, and its value depends on the inherent complexity of detailed dislocation theory.
The interaction can vary from entirely elastic, between dislocations with perpendicular Burgers vectors, to energy-storing, when intersection leads to the formation of a jog. The magnitude of α is typically in the range 0.5 – 1.0.

Forest hardening -

![forest hardening](images/forest.svg)

Figure 10: An active dislocation gliding in the primary slip plane. The pinning points are created by the intersections between the active and the forest dislocations.

It has been mentioned that two slip systems are activated during stage II of the deformation of a single crystal. However, the plastic flow of the crystal is mainly governed by the primary slip system (i.e. it is the active slip system), where the primary dislocations can move freely. The dislocations in the other slip system, however, are immobile and are termed the **forest dislocations**. Forest hardening is the dominant mechanism in stage II of single-crystal deformation. The active dislocations gliding in the primary slip plane get stuck at obstacles when they intersect the forest dislocations. These obstacles, or pinning points, are either jogs (when dislocations intersect each other) or Lomer locks (when dislocations react together). During stage II, the number of fixed obstacles increases as more Frank-Read sources are operated, which leads to an increase in the number of forest dislocations.

Deriving an expression for the constant hardening rate -

It has been mentioned that the hardening rate in stage II is constant, with a typical value of G/200. We now aim to derive an expression for this hardening rate by considering the movement of an active dislocation in the primary slip system. Consider a segment of dislocation of length l pinned at both ends, and let the mean free path of the dislocation segment be λ. The change in the dislocation density with respect to the strain is:

\[ \frac{{{\rm{d}}\rho }}{{{\rm{d}}\gamma }} = \frac{{{\rm{d}}l}}{{b\,{\rm{d}}a}} \]

where dl is the change in the line length of the dislocation segment and da is the area swept by the dislocation as it moves. From Figure 10, we have dl = l and da = λl, hence:

\[ \frac{{{\rm{d}}\rho }}{{{\rm{d}}\gamma }} = \frac{1}{{b\lambda }} \]

Differentiating the Taylor equation:

\[ \frac{{{\rm{d}}\tau }}{{{\rm{d}}\gamma }} = \frac{1}{2}\;\alpha \;b\;G\;{\rho ^{ - \frac{1}{2}}}\;\frac{{{\rm{d}}\rho }}{{{\rm{d}}\gamma }} = \frac{1}{2}\;\alpha \;b\;G\;{\rho ^{ - \frac{1}{2}}}\;\frac{1}{{b\lambda }} \]

\[ \frac{{{\rm{d}}\tau }}{{{\rm{d}}\gamma }} = \frac{{\alpha \;G}}{{2\;\lambda \;\sqrt \rho }} \]

The mean free path λ is usually a small multiple of the mean dislocation spacing \( 1/{\sqrt \rho } \), and so it follows that the expression above gives a reasonable value for the hardening rate of about G/200.
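As a rough numerical companion to the Taylor equation and the hardening-rate expression just derived, the sketch below evaluates both. All inputs are assumptions chosen for illustration: G = 48 GPa and b = 0.256 nm (roughly copper), α = 0.5 (within the range quoted above), and the mean free path taken as an assumed multiple n of the mean dislocation spacing; with n = 50 the sketch reproduces the quoted hardening rate of G/200.

```python
import math

# Taylor equation:  tau = alpha * b * G * sqrt(rho)
# Hardening rate:   d(tau)/d(gamma) = alpha * G / (2 * lambda * sqrt(rho))
# All inputs are illustrative assumptions (see lead-in text).

G = 48e9        # shear modulus (Pa), roughly copper
b = 0.256e-9    # Burgers vector magnitude (m)
alpha = 0.5     # interaction strength, within the quoted 0.5-1.0 range
rho = 1e13      # forest dislocation density (m^-2)
n = 50          # assumed ratio of mean free path to dislocation spacing

tau = alpha * b * G * math.sqrt(rho)           # stage II flow stress (Pa)
lam = n / math.sqrt(rho)                       # mean free path (m)
rate = alpha * G / (2 * lam * math.sqrt(rho))  # hardening rate (Pa)

print(f"Flow stress tau = {tau/1e6:.1f} MPa")
print(f"Hardening rate  = {rate/1e9:.2f} GPa  (~ G/{G/rate:.0f})")
```

Note that the hardening rate comes out as αG/2n, independent of ρ, which is why stage II appears as a straight line on the stress-strain curve.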
Dislocation dynamics -

Dislocation dynamics aims to simulate the dynamic, collective behaviour of individual dislocations and their interactions. The following video uses dislocation dynamics to simulate the behaviour of dislocations when an fcc single crystal is deformed. Although it is a single crystal, it does not exhibit any easy glide, and multiple slip systems are initiated from the start.

Continuum models describing the true stress-strain curves =

The continuum models which describe the true stress-strain curves, such as the Ludwik-Hollomon and Voce equations, are covered in the .

Grain boundary hardening of poly-crystals =

Compared to single crystals, poly-crystals tend to have higher yield stresses. This is because each grain in a poly-crystal has to undergo a complex shape change which is consistent with those of its neighbours, requiring multiple slip systems from the start. Therefore, unlike single crystals, **poly-crystals do not exhibit any kind of 'easy glide'** when they are deformed. Below is an explanation of how grain boundary hardening arises in a poly-crystal:

Summary =

This TLP has covered the following points:

1. New dislocations are generated from a Frank-Read source. The minimum shear stress required to operate a Frank-Read source, found by balancing the force on the dislocation against the line tension, is: \[ \tau = \frac{2Gb}{d} \] The Frank-Read source is important as it is the mechanism by which the dislocation density increases as a material is work hardened.

2. A dislocation can interact with either another dislocation or a solute atom. When dislocations interact with each other, they can either repel (if they have the same sign) or annihilate (if they have opposite signs). On the other hand, when a dislocation interacts with a solute atom, an energetically favourable arrangement is formed, leading to solid-solution strengthening.

3. A Lomer lock is a type of sessile dislocation formed when the plane which contains the line and Burgers vector of the resultant edge dislocation is not a close-packed slip plane of the system.

4. Climb and cross-slip are the two processes by which dislocations can become organised and annihilate. Climb is the mechanism by which an edge dislocation moves from one slip plane to another through the incorporation of vacancies or atoms. Cross-slip is the movement of a screw dislocation from one allowable slip plane to another. Cross-slip is favoured in metals with a high stacking-fault energy.

5. Jogs are formed by dislocation intersections. In particular, the intersection between two screw dislocations is crucial to work hardening.

6. The two stages of the plastic deformation of a single crystal have been discussed. It is important to note that stage I only exists in single crystals, as only one slip system operates (poly-crystals do not exhibit stage I, as multiple slip systems are initiated from the start). Stage II, where the hardening rate is constant, is governed by forest hardening. The flow stress at this stage is given by the Taylor equation: \[ \tau = \alpha \;b\;G\sqrt \rho \]

7. Forest hardening arises when the active dislocations in the primary slip system are impeded by the forest dislocations in the secondary slip system. The pinning points, which are created by the intersections between the active and forest dislocations, are either jogs or Lomer locks.

8. The typical value of the hardening rate is \( G \)/200. The expression for the hardening rate is found by considering the movement of a dislocation segment: \[ \frac{{{\rm{d}}\tau }}{{{\rm{d}}\gamma }} = \frac{{\alpha \;G}}{{2\;\lambda \;\sqrt \rho }} \]

9. Poly-crystals have higher yield stresses than single crystals, as each grain needs to undergo a shape change which is consistent with those of its neighbours, requiring multiple slip systems from the start. In addition, the yield stress of a poly-crystal is related to its grain size D by the Hall-Petch relationship: \[ {\sigma \_y} = {\sigma \_0} + \frac{k}{{\sqrt D }} \]

Questions =

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*
1. What is the ratio of the minimum shear stress required to operate a Frank-Read source in work-hardened copper (\( \rho = 10^{14}\;{\rm{m}}^{-2} \)) to that in annealed copper (\( \rho = 10^{10}\;{\rm{m}}^{-2} \))?

| | | |
| - | - | - |
| | a | 1 |
| | b | 10 |
| | c | 100 |
| | d | 1000 |

2. Which of the following statements about cross-slip is true?

| | | |
| - | - | - |
| | a | Cross-slip is favoured by a high stacking-fault energy |
| | b | Cross-slip is favoured by a low stacking-fault energy |
| | c | Edge dislocations can bypass obstacles via cross-slip |
| | d | None of the above |

3. Which type of intersection is most important to work hardening?

| | | |
| - | - | - |
| | a | edge-screw intersection |
| | b | edge-edge intersection |
| | c | screw-screw intersection |
| | d | none of the above |

4. Which of the following is not a pinning site in forest hardening?

| | | |
| - | - | - |
| | a | Lomer locks |
| | b | jogs |
| | c | kinks |
| | d | precipitates |

5. Which of the following statements about poly-crystals is/are true?

| | | |
| - | - | - |
| | a | Poly-crystals do not exhibit any easy glide |
| | b | Poly-crystals tend to have higher yield stresses than single crystals |
| | c | Poly-crystals with smaller grain sizes have higher yield stresses |
| | d | All of the above |

### Deeper questions

*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

6. Explain why the intersection of two screw dislocations is important to work hardening.

Going further =

### Books

**The following books contain extensive information about Frank-Read sources, jog formation, Lomer locks and single-crystal deformation:**

* R.W.K. Honeycombe, *The Plastic Deformation of Metals*, Second Edition, 1984, ISBN: 0-7131-3468-2
* W.F. Hosford, *Mechanical Behavior of Materials*, Second Edition, 2010, ISBN: 978-0-521-19569-0
* G.E. Dieter, *Mechanical Metallurgy*, SI Metric Edition, 1988, ISBN: 0-07-100406-8

**For a more detailed and mathematical description of forest hardening and single-crystal deformation, consult:**

* A.S. Argon, *Strengthening Mechanisms in Crystal Plasticity*, 2008, ISBN: 978-0-19-851600-2

### Other resources

* A.D. Rollett, U.F. Kocks, *A Review of the Stages of Work Hardening*, 35-36 (1993), pp. 1-18
* U.F. Kocks, *A Statistical Theory of Flow Stress and Work-hardening* (1965)

**Both contain a detailed discussion of forest hardening and single-crystal deformation.**
Aims

On completion of this TLP you should:

* understand the major interactions between X-rays and a crystal lattice
* know how this phenomenon can be used to gain knowledge of the crystalline structure of a material
* be aware of the techniques used to obtain and process X-ray diffraction data

Before you start

You may find it helpful to read the . You will find it beneficial to have knowledge of crystal structures, as this will enable a better understanding of the results of X-ray diffraction. This is covered in the TLP. You should also have read the and the TLP.

Introduction

X-radiation ("X-rays") is electromagnetic radiation with wavelengths between roughly 0.1 Å and 100 Å, typically similar to the interatomic distances in a crystal. This is convenient, as it allows crystal structures to diffract X-rays. X-ray diffraction is an important tool used to identify phases by comparison with data from known structures, and to quantify changes in cell parameters, orientation, crystallite size and other structural parameters. It is also used to determine the (crystallographic) structure (i.e. cell parameters, space group and atomic coordinates) of novel or unknown crystalline materials. In crystallography, measurements are expressed in Ångströms (Å). An Ångström corresponds to 1 × 10-10 m, so one Ångström is equal to 0.1 nm.

Experimental matters

### Production and measurement of X-rays

The laboratory source of X-rays consists of an evacuated tube in which electrons are emitted from a heated tungsten filament and accelerated by an electric potential (typically several tens of kilovolts) to impinge on a water-cooled metal target. When the target's inner electrons are ejected and outer ones fall to take their place, X-rays are emitted. Some have a continuous distribution of wavelengths between about 0.5 Å and 5 Å ("white radiation") and some have wavelengths characteristic of the electronic levels in the target. For most experiments, a single characteristic radiation is selected using a filter or monochromator.

### Methods for obtaining characteristic radiation

The diagram below illustrates the characteristic X-ray emission spectrum that is obtained from a copper target.

![Diagram of characteristic X-ray emission from Cu target](images/spectra.gif)

The white and Kβ radiation can be reduced by:

*1. Using a filter, which works by the absorption principle:*

![Characteristic X-ray emission using an absorption filter](images/spectra_with-Ni-Edge.gif)

The Ni absorption edge lies midway between the Kβ and Kα lines, so that the former is reduced very substantially while the latter is only marginally reduced. In selecting the thickness of the filter, a compromise has to be reached between eliminating as much as possible of the undesired radiation and retaining as much as possible of the desired radiation. For other wavelengths, other elements provide suitable filters.

*2. Using a monochromator, which works on the principle of diffraction:*

A monochromator (a single crystal of known lattice spacing and orientation) is placed in the path of the primary or diffracted beam. The monochromator is set so that the beam is diffracted and only X-rays with the required wavelength reach the detector. (See .)

*3. Modern detectors can also filter wavelengths (or energies) electronically.*

### Detectors

In the past most X-ray work was done with film; now electronic detectors are used. A single point (e.g. Geiger-Müller, scintillation or proportional counter), a line (1D) or an area (2D) detector may be used.
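As an aside on the tube voltage: the shortest wavelength in the continuous ("white") spectrum is set by the accelerating potential through the Duane-Hunt limit, λmin = hc/eV (an electron cannot give up more than its full kinetic energy to a single photon). The small Python sketch below evaluates this for a few assumed tube voltages; the voltages are illustrative, not values quoted in this TLP.

```python
# Duane-Hunt limit: the shortest white-radiation wavelength from an
# X-ray tube is hc/(eV) for accelerating potential V.
h = 6.626e-34   # Planck constant (J s)
c = 2.998e8     # speed of light (m/s)
e = 1.602e-19   # electron charge (C)

for V in (20e3, 40e3, 60e3):            # assumed tube voltages (V)
    lam_min = h * c / (e * V)           # metres
    print(f"V = {V/1e3:.0f} kV -> lambda_min = {lam_min*1e10:.2f} Å")
```

For a tube run at a few tens of kilovolts this gives λmin of a few tenths of an Ångström, consistent with the white radiation extending down towards 0.5 Å as described above.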
Bragg′s law =

The concept used to derive Bragg's law is very similar to that used for Young's double-slit experiment. An X-ray incident upon a sample will either be transmitted, in which case it will continue along its original direction, or it will be scattered by the electrons of the atoms in the material. All the atoms in the path of the X-ray beam scatter X-rays. We are primarily interested in the peaks formed when scattered X-rays constructively interfere. (In addition, after scattering some X-rays suffer a change in wavelength. This incoherent scattering is not considered here.)

Constructive interference occurs when two X-ray waves with phases separated by an integer number of wavelengths add to make a new wave with a larger amplitude. When two parallel X-rays from a coherent source scatter from two adjacent planes, their path difference must be an integer number of wavelengths for constructive interference to occur:

Path difference = *n λ*

Therefore:

*n λ* = 2 *d* sin *θ*

In order to consider the general case of hkl planes, the equation can be rewritten as:

λ = 2 *d*hkl sin *θ*hkl

since *d*hkl incorporates higher orders of diffraction, i.e. *n* greater than 1.

The angle between the transmitted and Bragg-diffracted beams is always equal to 2θ as a consequence of the geometry of the Bragg condition. This angle is readily obtainable in experimental situations, and hence the results of X-ray diffraction are frequently given in terms of 2θ. However, it is very important to remember that the angle used in the Bragg equation must always be that between the incident radiation and the diffracting plane, i.e. θ. The diffracting plane might not be parallel to the surface of the sample, in which case the sample must be tilted to fulfil this condition. (The concept of orientation will be dealt with later in this TLP.) This is also the procedure used to obtain asymmetric reflections and to work in transmission.

Single crystal diffraction -

The simplest way of demonstrating the application of Bragg's law is to diffract X-rays through a single crystal. The simple teaching diffractometer in the photo below projects a beam of X-rays onto the crystal. The diffracted beam is collimated through a narrow slit and passed through a nickel filter. The counter is a Geiger-Müller tube.

*Diffractometer (Click on image to view larger version)*

*Top view of diffractometer (Click on image to view larger version)*

The crystal used in this experiment is lithium fluoride. Assuming that the large flat face will be perpendicular to a particular crystallographic direction, this face is set parallel to the line containing the source and detector at θ = 0. The gearing of the counter arm is such that, once set, the θ - 2θ relationship between the incident, transmitted and diffracted beams is maintained. The video below shows manual operation and the location of the first diffraction peak. Watch the counter carefully.

Diffractometer experiment with lithium fluoride crystal

Using the 2θ value observed at a peak of intensity, the known wavelength λ for Cu Kα (1.54 Å) and the Bragg equation, a value for the plane spacing (d-spacing) can be determined. If the peaks can be indexed, i.e. assigned to scattering from certain planes, then from simple geometry the lattice parameters can be calculated. This is shown later in the TLP.
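The arithmetic is quick to script. The Python sketch below applies the Bragg equation to the peak seen in the video (2θ ≈ 44.4° with Cu Kα radiation); the indexing step that turns the resulting d-spacing into a lattice parameter is covered in the next section.

```python
import math

# Bragg's law: lambda = 2 d sin(theta). Given a measured peak position
# (2-theta) and the wavelength, recover the plane spacing d.
lam = 1.54                      # Cu K-alpha wavelength (Å)
two_theta = 44.4                # measured peak position (degrees)

theta = math.radians(two_theta / 2)
d = lam / (2 * math.sin(theta))
print(f"d = {d:.3f} Å")         # ~2.04 Å for the LiF peak in the video
```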
Determining lattice parameters accurately =

When using diffraction data to obtain a lattice parameter, it is important to note the shape of the θ - sin θ curve:

![Diagram of sin theta curve](images/sintheta curve.gif)

The largest gradient on this curve is at low values of θ. This means that a small error in the recorded angle of a diffraction peak will cause a significant error in the calculated lattice parameter. At high values of θ, the error in the calculated sin θ value is reduced, leading to a smaller error in the calculated value of the lattice parameter. The same conclusion can be drawn by differentiating the Bragg equation. This implies that lattice parameters calculated from high-angle diffraction peaks are more accurate than those taken from low-angle peaks.

Relationship between crystalline structure and X-ray data =

### Peak positions

Using Bragg's law, the peak positions can be calculated theoretically:

\[\theta = \arcsin \left( {\frac{\lambda }{{2d}}} \right)\]

For a cubic unit cell: d = \(\frac{a}{{\sqrt N }}\), where \( N = h^2 + k^2 + l^2 \) and a is the cell parameter. (More complex relationships for less symmetrical cells are given in most standard textbooks.) So the measured value 2θ can be related to the cell parameters. In the earlier video a peak was observed at about 44.4°. Knowing the wavelength, 1.54 Å, and using Bragg's law gives a d-spacing of ~2.04 Å. Some additional information is required to obtain lattice parameters from this d-spacing. Knowing that LiF has a cubic structure with a unit cell of ~4.03 Å means that this reflection must be (002) (which is equivalent to (200) and (020)).

### Peak intensities

The structure factor, Fhkl, of a reflection hkl depends on the types of atoms and their positions (x, y, z) in the unit cell:

\[{F\_{hkl}} = \sum\limits\_i {{f\_i}\exp \left[ {2\pi i(h{x\_i} + k{y\_i} + l{z\_i})} \right]} \]

fi is the scattering factor for atom i and is related to its atomic number. The intensity of a peak Ihkl is given by:

\[{I\_{hkl}} \propto {\left| {{F\_{hkl}}} \right|^2}\]

The proportionality includes the multiplicity for that family of reflections and other geometrical factors. Differences in intensity do relate to changes in chemistry (scattering factor). However, most commonly for multiphase samples, changes in intensities are related to the amount of each phase present in the sample. Suitable calibration factors are required to perform quantitative phase analysis.

### Peak widths

The peak width β in radians (often measured as the full width at half maximum, FWHM) is inversely proportional to the crystallite size Lhkl perpendicular to the hkl plane:

\[{L\_{hkl}} = \frac{\lambda }{{\beta \cos \theta }}\;\;\;\;\;\;{\rm{(Scherrer\;equation)}} \]

(Small crystals are the most common cause of line broadening, but other defects can also cause peak widths to increase.)

In the next section there is a simulation which shows how changes in the structure of a simple cubic material influence the diffraction pattern.
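To make the structure-factor formula concrete, the sketch below evaluates Fhkl for a simple monatomic fcc cell, with atoms at (0,0,0), (½,½,0), (½,0,½) and (0,½,½). The scattering factor is set to 1 for simplicity — an assumption, since real scattering factors depend on the atom type and scattering angle. The sketch reproduces the well-known fcc selection rule: reflections survive only when h, k and l are all even or all odd.

```python
import cmath
from itertools import product

# Structure factor F_hkl = sum_i f_i * exp(2*pi*i*(h*x + k*y + l*z)).
# Monatomic fcc basis; f_i taken as 1 (real values depend on the atom
# and the scattering angle, so the absolute scale here is illustrative).
fcc_basis = [(0, 0, 0), (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5)]

def structure_factor(h, k, l, basis, f=1.0):
    return sum(f * cmath.exp(2j * cmath.pi * (h*x + k*y + l*z))
               for (x, y, z) in basis)

for h, k, l in product(range(3), repeat=3):
    if (h, k, l) == (0, 0, 0):
        continue
    F = structure_factor(h, k, l, fcc_basis)
    if abs(F) > 1e-8:                      # allowed reflection
        print(f"({h}{k}{l}): |F| = {abs(F):.1f}")
```

Running this prints only the unmixed-index reflections, such as (111) and (002), each with |F| = 4; mixed-index reflections like (100) and (110) sum to zero and are absent from the pattern.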
Powder diffraction =

A powder is a polycrystalline material in which there are all possible orientations of the crystals, so that similar planes in different crystals scatter in different directions.

![Diagram of powder scattering](images/powder-scattering.gif)

Scattering in X-ray powder diffraction

In single crystal X-ray diffraction there is only one orientation. This means that for a given wavelength and sample setting relatively few reflections can be measured: possibly zero, one, two (as in the video), or possibly up to three or four. As other crystals are added with slightly different orientations, several diffraction spots appear at the same 2*θ* value and spots start to appear at other values of 2*θ*. Rings consisting of spots (spotty rings) and then rings of even intensity are formed. A powder pattern consists of rings in two dimensions (cones in three dimensions) of even intensity from each accessible reflection at the 2*θ* angle defined by Bragg's law. The other situation, intermediate between single crystal and powder diffraction, is when the sample is oriented and the spots are spread into arcs. This is covered in the TLP.

This animation shows the relationship between single crystal and powder diffraction, as measured on a 2-dimensional detector (such as film):

### An X-ray diffractometer

The photograph below shows a typical powder diffractometer.

![Photograph of labelled x-ray diffractometer](images/labelled.jpg)

The X-ray beam comes from the tube, passes through slits, is diffracted from the sample, passes through another set of slits, is diffracted from the secondary beam monochromator and is measured by the detector. The video below shows how the sample moves through *θ* (~5 to 45°) while the detector scans through 2*θ* (~10 to 90°). It has been speeded up, as a typical data collection time would be somewhere between 10 minutes and 10 hours.

The simulation below shows how the powder diffraction pattern of a simple face-centred cubic structure is influenced by changes in the cell parameter, atomic number and crystallite size, and what happens when the material becomes amorphous.

Phase identification =

Powder diffraction data are commonly used to identify or 'fingerprint' crystalline materials. An international database was started in the 1930s and is regularly updated. In its simplest form, PDF-1 (Powder Diffraction File) lists d-spacings and relative intensities. Most data are now indexed and so include cell parameters, the chemistry, density and other properties of the material. This is called PDF-2. It is maintained by the ICDD (International Centre for Diffraction Data), formerly the JCPDS (Joint Committee for Powder Diffraction Standards). To identify a particular phase, both the peak positions and the relative intensities must fit. In general this requirement should hold for at least three peaks. Below are two examples of how this process works. The first is a simple purity check on hydroxyapatite, and the second a more complex example identifying three possible phases in stabilised zirconias. Similar logic applies to completely unknown samples.

Oriented (or textured) samples =

These can be considered as intermediate between single crystal and powder samples. In a diffractometer scan the relative intensities of the peaks are intermediate between those of a single crystal and a powder. The change in relative intensities may indicate the orientation of the sample. This can be observed in the following simulation:

In a complementary manner, orientation can be measured by recording how a reflection is spread at constant 2*θ*. An unoriented powder has rings of constant intensity, while an oriented sample has an arc or a sharp spot. It is sometimes helpful to consider oriented samples in .
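Putting the pieces of this TLP together, the sketch below generates the expected peak list for an fcc material, using LiF-like values (a ≈ 4.03 Å, the approximate cell from earlier, and Cu Kα, λ = 1.54 Å). A measured pattern would be matched against a list like this (together with intensities) when fingerprinting a phase.

```python
import math
from itertools import product

# Expected powder-pattern peak positions for an fcc material:
# fcc selection rule (h, k, l all even or all odd), d = a / sqrt(N)
# with N = h^2 + k^2 + l^2, and 2*theta from Bragg's law.
a = 4.03      # cubic cell parameter (Å), approximate value for LiF
lam = 1.54    # Cu K-alpha wavelength (Å)

seen = set()
for h, k, l in product(range(5), repeat=3):
    if (h, k, l) == (0, 0, 0):
        continue
    if len({h % 2, k % 2, l % 2}) > 1:   # mixed indices: forbidden for fcc
        continue
    N = h*h + k*k + l*l
    if N in seen:                        # one entry per family of planes
        continue
    seen.add(N)
    d = a / math.sqrt(N)
    s = lam / (2 * d)
    if s <= 1:                           # reflection accessible at this wavelength
        two_theta = 2 * math.degrees(math.asin(s))
        print(f"({h}{k}{l}): d = {d:.3f} Å, 2theta = {two_theta:.1f}°")
```

The first two entries, (111) near 38.6° and (002) near 44.9°, bracket the ~44.4° peak observed for LiF in the single-crystal video, the small offset reflecting the approximate cell parameter assumed here.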
Summary =

Following completion of this TLP, you should have a basic understanding of the phenomenon of X-ray diffraction by a crystalline material. This package has explained how to use an X-ray diffraction experiment to reveal information such as what crystalline phases are present, their cell (or lattice) parameters, their crystallite size, and whether the phase is single crystal, oriented or a polycrystalline powder. The main aspects of collecting and analysing X-ray data in the laboratory have been covered.

Questions =

### Quick questions

*You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!*

1. In a simple X-ray scan, which of these affects the peak positions?

| | | | |
| - | - | - | - |
| Yes | No | a | X-ray wavelength |
| Yes | No | b | Crystallite size |
| Yes | No | c | Unit cell parameter |
| Yes | No | d | Atomic number |

2. Which of these is **not** involved in the diffraction of X-rays through a crystal?

| | | |
| - | - | - |
| | a | Electron scattering |
| | b | Crystallographic planes |
| | c | Nuclear interactions |
| | d | Constructive interference |

3. What is the smallest d-spacing that can be measured for a given wavelength λ?

| | | |
| - | - | - |
| | a | 0.5λ |
| | b | λ |
| | c | 2λ |
| | d | no limit |

### Deeper questions

*The following questions require some thought and reaching the answer may require you to think beyond the contents of this TLP.*

4. What sort of improvement in precision might you expect in cell parameter calculations when increasing 2θ?

5. A crystal has a cubic unit cell of 4.2 Å. Using a wavelength of 1.54 Å, at what angle (2θ) would you expect to measure the (111) peak?

| | | |
| - | - | - |
| | a | 10.6º |
| | b | 18.5º |
| | c | 43.0º |
| | d | 37º |

6. For a sample with a crystallite size of 100 Å and using a wavelength of 1.54 Å, estimate the peak breadth, in radians and degrees, of a peak at 60° 2θ.

7. Given these experimental and reference data (A, B, C and D), which phases are:

1. Definitely present
2. Not observable (implies absent at any significant level)
3. Unsure

In each case give a reason for your answer, and when unsure consider whether you could do something to clarify the situation.

![](images/egto75-no%20babels_2.gif)

Going further =

### Books

* C. Hammond, *The Basics of Crystallography and Diffraction*, 2nd edition, OUP, 2001
* B. D. Cullity and S. R. Stock, *Elements of X-ray Diffraction*, 3rd edition, Prentice Hall, 2001

### Websites

* TLP on using reciprocal space to understand diffraction patterns.
* A detailed resource about diffraction. Created by the Universities of Würzburg and Munich.
* The ultimate reference for crystallography. Also has links to many other crystallographic websites.
* This site primarily hosts crystallographic software but also has links to web-based teaching resources.
* Advanced Certificate in Powder Diffraction on the Web.
* Created at EPFL, Switzerland. This course introduces the basic concepts of crystallography and is freely available on the web for everyone. The symmetry of crystalline materials and the properties of diffraction are presented by means of interactive applets. The description of structures is greatly facilitated by the combination of drawing tools and easy access to databases.